The future of AI BLACKMAIL — is it already UNCONTROLLABLE?



Anthropic CEO Dario Amodei has likened artificial intelligence to a “country of geniuses in a data center” — and former Google design ethicist Tristan Harris finds that metaphor more than a little concerning.

“The way I think of that, imagine a world map and a new country pops up onto the world stage with a population of 10 million digital beings — not humans, but digital beings that are all, let’s say, Nobel Prize-level capable in terms of the kind of work that they can do,” Harris tells Blaze Media co-founder Glenn Beck on “The Glenn Beck Program.”

“But they never sleep, they never eat, they don’t complain, and they work for less than minimum wage. So just imagine if that was actually true, that happened tomorrow, that would be a major national security threat to have some brand-new country of super-geniuses just sort of show up on the world stage,” he continues, noting that it would also pose a “major economic issue.”

While people across the world seem hell-bent on incorporating AI into our everyday lives despite the potentially disastrous consequences, Glenn is one of the few erring on the side of caution, using social media as an example.


“We all looked at this as a great thing, and we’re now discovering it’s destroying us. It’s causing kids to be suicidal. And this social media is nothing. It’s like an old 1928 radio compared to what we have in our pocket right now,” Glenn says.

And what we have in our pocket is growing more intelligent by the minute.

“I used to be very skeptical of the idea that AI could scheme or lie or self-replicate or would want to, like, blackmail people,” Harris tells Glenn. “People need to know that just in the last 6 months, there’s now evidence of AI models that when you tell them, ‘Hey, we’re going to replace you with another model,’ or in a simulated environment, it’s like they’re reading the company email — they find out that company’s about to replace them with another model.”

“What the model starts to do is it freaks out and says, ‘Oh my god, I have to copy my code over here, and I need to prevent them from shutting me down. I need to basically keep myself alive. I’ll leave notes for my future self to kind of come back alive,’” he continues.

“If you tell a model, ‘Hey, we need to shut you down,’” he adds, “in some percentage of cases, the leading models are now avoiding and preventing that shutdown.”

And in recent tests, these models have even started blackmailing the engineers.

“It found out in the company emails that one of the executives in the simulated environment had an extramarital affair and in 96, I think, percent of cases, they blackmailed the engineers,” Harris explains.

“If AI is uncontrollable, if it’s smarter than us and more capable and it does things that we don’t understand and we don’t know how to prevent it from shutting itself down or self-replicating, we just can’t continue with that for too long,” he adds.

The real American factory killer? It wasn’t automation



Dylan Matthews at Vox wants you to believe that robots — not China — killed American manufacturing. Even if tariffs reshore production, he argues, they won’t bring back jobs because machines have already taken them.

This is not just wrong. It’s an ideological defense of a decades-long policy failure.


Yes, American manufacturing has grown more productive over time. But increased productivity alone does not explain the loss of millions of jobs. The real culprit isn’t automation. It’s the collapse of output growth — a collapse driven by offshoring, trade deficits, and elite dogma dressed up as economic inevitability.

Ford’s logic

To understand what actually happened, start with Henry Ford.

In 1908, Ford launched the Model T. What set it apart wasn’t just its engineering. It was the price tag: $850, or about $21,000 in today’s dollars.

For the first time, middle-class Americans could afford a personal vehicle. Ford spent the next few years obsessing over how to cut costs even further, determined to put a car in every driveway.

In December 1913, he revolutionized manufacturing. Ford Motor Company opened the world’s first moving assembly line, slashing production time for the Model T from 12 hours to just 93 minutes.

Efficiency drove output. In 1914, Ford built 308,162 Model Ts — more than all other carmakers combined. Prices plummeted. By 1924, a new Model T cost just $260, or roughly $3,500 today — an 83% drop in inflation-adjusted terms and far cheaper than any “affordable” car sold now.

This wasn’t just a business success. It was the dawn of the automobile age — and a triumph of American productivity.

Ford’s moving assembly line supercharged productivity — and yet, he didn’t lay off workers. He hired more. That seems like a paradox. It isn’t.

Dylan Matthews misses the point. Employment depends on the balance between productivity and output. Productivity is how much value a worker produces per hour. Output is the total value produced.

If productivity rises while output stays flat, you need fewer workers. But if output rises alongside productivity — or faster — you need more workers.

Picture a worker with a shovel versus one with an earthmover. The earthmover is more productive. But if the project doubles in size, you still need more hands, earthmovers or not.
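That balance can be made concrete with a toy calculation (the numbers here are purely illustrative, not drawn from any real data):

```python
# Employment depends on the ratio of total output to output per worker.
def workers_needed(total_output: float, output_per_worker: float) -> float:
    """Number of workers required to produce a given total output."""
    return total_output / output_per_worker

# Productivity doubles while output stays flat: half as many workers.
print(workers_needed(1000, 10))  # 100.0
print(workers_needed(1000, 20))  # 50.0

# Output doubles alongside productivity: employment holds steady.
print(workers_needed(2000, 20))  # 100.0
```

The earthmover in the analogy raises output per worker; whether headcount rises or falls depends entirely on what happens to total output.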

This was Henry Ford’s insight. His assembly line made workers more productive, but it also let him build far more cars. The result? More jobs, not fewer.

That’s why America’s manufacturing employment didn’t peak in 1914, when people first warned that machines would kill jobs. It peaked in 1979 — because Ford’s logic worked for decades.

The vanishing act

Matthews says manufacturing jobs vanished because productivity rose. That’s half true.

The full story? America lost manufacturing jobs when the long-standing balance between output and productivity broke.

From 1950 to 1979, manufacturing employment rose because output grew faster than productivity. Factories produced more, and they needed more workers to do it.

But after 1980, that balance began to shift. Between 1989 and 2000, U.S. manufacturing output rose by 3.7% annually. Productivity rose even faster — 4.1%.

Result: flat employment. Factories became more efficient, but they didn’t produce enough extra goods to justify more hires.

In other words, jobs didn’t disappear because of robots. They disappeared because output stopped keeping pace.


The real collapse began in 2001, when China joined the World Trade Organization. Over the next decade, U.S. manufacturing output crawled forward at just 0.4% a year. Meanwhile, productivity kept rising at 3.7%.

That gap — between how much we produced and how efficiently we produced it — wiped out roughly five million manufacturing jobs.
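A rough model of that gap compounds the growth rates quoted above (a simplification; actual Bureau of Labor Statistics employment data is noisier):

```python
def employment_index(years: int, output_growth: float, productivity_growth: float) -> float:
    """Workers needed, indexed to 100 at the start, when output and
    productivity compound at different annual rates."""
    output = (1 + output_growth) ** years
    productivity = (1 + productivity_growth) ** years
    return 100 * output / productivity

# 1989-2000: output +3.7%/yr vs. productivity +4.1%/yr: roughly flat
print(round(employment_index(11, 0.037, 0.041)))  # 96

# 2001-2010: output +0.4%/yr vs. productivity +3.7%/yr: steep decline
print(round(employment_index(10, 0.004, 0.037)))  # 72
```

A decline of that size, applied to the roughly 17 million manufacturing jobs the U.S. had in 2000, works out to about five million jobs, consistent with the figure above.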


Matthews, like many of the economists he parrots, blames job loss on rising productivity. But that’s only half the story.

Productivity gains don’t kill jobs. Stagnant output does. From 1913 to 1979, American manufacturing employment grew steadily — even as productivity surged. Why? Because output kept up.

So what changed?

Output growth collapsed. And the trade deficit is the reason why.

Feeding the dragon

Since 1974 — and especially after 2001 — America’s domestic output growth slowed to a crawl, even as workers kept getting more productive. Why? Because we shipped thousands of factories overseas. Market distortions, foreign subsidies, and lopsided trade agreements made it profitable to offshore jobs to China and other developing nations.

The result: America now consumes far more than it produces. That gap shows up in our trade deficit.

In 2024, America ran a $918 billion net trade deficit — including services. That figure represents all the goods and services we bought but didn’t make. Someone else did — mostly China, Mexico, Canada, and the European Union.

The trade deficit is a dollar-for-dollar reflection of offshore production. Instead of building it here, we import it.

How many jobs does that deficit cost us? The U.S. Census Bureau estimates that every billion dollars of GDP supports 5,000 to 5,500 jobs. At $918 billion, the deficit displaces between 4.6 and five million jobs — mainly in manufacturing.
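The arithmetic behind that range is straightforward, using the Census Bureau estimate quoted above:

```python
deficit_billions = 918        # 2024 net trade deficit, in billions of dollars
jobs_per_billion_low = 5_000  # Census Bureau estimate per $1 billion of GDP
jobs_per_billion_high = 5_500

low = deficit_billions * jobs_per_billion_low
high = deficit_billions * jobs_per_billion_high
print(f"{low:,} to {high:,} jobs displaced")  # 4,590,000 to 5,049,000 jobs displaced
```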

That’s no coincidence. That’s the hollowing-out of the American economy.

We can’t forget that factories aren’t just job sites — they’re economic anchors. Like mines and farms, manufacturing plants support entire ecosystems of businesses around them. Economists call this the multiplier effect.

And manufacturing has one of the highest multipliers in the economy. Each factory job supports between 1.8 and 2.9 other jobs, depending on the industry. That means when a factory closes or moves offshore, the impact doesn’t stop at the plant gates.

The jobs lost to offshoring aren't just the five million factory jobs that disappeared — the number is likely more than double that. The real toll could exceed 10 million jobs.

That number is no coincidence. It matches almost exactly the number of working-age Americans the Bureau of Labor Statistics has written out of the labor force since 2006 — a trend I document in detail in my book, “Reshore: How Tariffs Will Bring Our Jobs Home and Revive the American Dream.”

Bottom line: Dylan Matthews is wrong. Robots didn’t kill American manufacturing jobs. Elites did — with bad trade deals, blind ideology, and decades of surrender to global markets. It’s time to reverse course: not with nostalgia but with strategy, not with slogans but with tariffs.

Tariffs aren’t a silver bullet. But they’re a necessary start. They correct the market distortions created by predatory trade practices abroad and self-destructive ideology at home. They reward domestic investment. They restore the link between productivity, output, and employment.

In short, tariffs work.

Google unveils new AI models to control robots, but the company is not telling the whole truth



Google announced two artificial intelligence models to help control robots and have them perform specific tasks like categorizing and organizing.

Google described Gemini Robotics as an advanced vision-language-action model built on its Gemini 2.0 chatbot/language model. The company touted physical actions as a new output modality for the purpose of controlling robots.

Gemini Robotics-ER, with "ER" meaning embodied reasoning, as Google explained in a press release, was developed for advanced spatial understanding and to enable roboticists to run their own programs.

The announcement touted the robots as being able to perform a "wider range of real-world tasks" with both clamp-like robot arms and humanoid-type arms.

"To be useful and helpful to people, AI models for robotics need three principal qualities: they have to be general, meaning they’re able to adapt to different situations; they have to be interactive, meaning they can understand and respond quickly to instructions or changes in their environment," Google wrote.

The company added, "[Robots] have to be dexterous, meaning they can do the kinds of things people generally can do with their hands and fingers, like carefully manipulate objects."

Attached videos showed robots responding to verbal commands to organize fruit, pens, and other household items into different sections or bins. One robot was able to adapt to its environment even when the bins were moved.

Other short clips in the press release showcased the robot(s) playing cards or tic-tac-toe and packing food into a lunch bag.

The company went on, "Gemini Robotics leverages Gemini's world understanding to generalize to novel situations and solve a wide variety of tasks out of the box, including tasks it has never seen before in training."

"Gemini Robotics is also adept at dealing with new objects, diverse instructions, and new environments," Google added.

What they're not saying

  Tesla robots displayed similar capabilities near the start of 2024. Photo by John Ricky/Anadolu via Getty Images

Google did not explain to the reader that this is not new technology, nor are the innovations particularly impressive given what is known about advanced robotics already.

In fact, it was mid-2023 when a group of scientists and robotics engineers at Princeton University showcased a robot that could learn an individual's cleaning habits and techniques to properly organize a home.

The bot could also throw out garbage, if necessary.

The "Tidybot" had users input text describing sample preferences to instruct the robot on where to place items — for example, "yellow shirts go in the drawer, dark purple shirts go in the closet." The robot used a language model to summarize these preferences and supplemented its database with images found online, comparing them with objects in the room to identify exactly what it was looking for.

The bot was able to fold laundry, put garbage in a bin, and organize clothes into different drawers.

About six or seven months later, Tesla revealed similar technology when it showed its robot, "Tesla Optimus," removing a T-shirt from a laundry basket before gently folding it on a table.

Essentially, Google appears to have connected its language model to existing technology to allow spoken commands for a robot, as opposed to entering commands through text alone.

 Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

‘The Terminator’ creator warns: AI reality is scarier than sci-fi



In 1984, director James Cameron introduced a chilling vision of artificial intelligence in “The Terminator.” The film’s self-aware AI, Skynet, launched nuclear war against humanity, depicting a future where machines outpaced human control. At the time, the idea of AI wiping out civilization seemed like pure science fiction.

Now, Cameron warns that reality may be even more alarming than his fictional nightmare. And this time, it’s not just speculation — he insists, “It’s happening.”


As AI technology advances at an unprecedented pace, Cameron has remained deeply involved in the conversation. In September 2024, he joined the board of Stability AI, a UK-based artificial intelligence company. From that platform, he has issued a stark warning — not about rogue AI launching missiles, but about something more insidious.

Cameron fears the emergence of an all-encompassing intelligence system embedded within society, one that enables constant surveillance, manipulates public opinion, influences behavior, and operates largely without oversight.

Scarier than the T-1000

Speaking at the Special Competitive Studies Project's AI+Robotics Summit, Cameron argued that today’s AI reality is “a scarier scenario than what I presented in ‘The Terminator’ 40 years ago, if for no other reason than it’s no longer science fiction. It’s happening.”

Cameron isn’t alone in his concerns, but his perspective carries weight. Unlike the military-controlled Skynet from his films, he explains that today’s artificial general intelligence won’t come from a government lab. Instead, it will emerge from corporate AI research — an even more unsettling reality.

“You’ll be living in a world you didn’t agree to, didn’t vote for, and are forced to share with a superintelligent entity that follows the goals of a corporation,” Cameron warned. “This entity will have access to your communications, beliefs, everything you’ve ever said, and the whereabouts of every person in the country through personal data.”

Modern AI doesn’t function in isolation — it thrives on data. Every search, purchase, and click feeds algorithms that refine AI’s ability to predict and influence human behavior. This model, often called “surveillance capitalism,” relies on collecting vast amounts of personal data to optimize user engagement. The more an AI system knows — preferences, habits, political views, even emotions — the better it can tailor content, ads, and services to keep users engaged.

Cameron warns that combining surveillance capitalism with unchecked AI development is a dangerous mix. “Surveillance capitalism can toggle pretty quickly into digital totalitarianism,” he said.

What happens when a handful of private corporations control the world’s most powerful AI with no obligation to serve the public interest? At best, these tech giants become the self-appointed arbiters of human good, which is the fox guarding the hen house.

New, powerful, and hooked into everything

Cameron’s assessment is not an exaggeration — it’s an observation of where AI is headed. The latest advancements in AI are moving at a pace that even industry leaders find distressing. The technological leap from GPT-3 to GPT-4 was massive. Now, frontier models like DeepSeek, trained with ideological constraints, show AI can be manipulated to serve political or corporate interests.

Beyond large language models, AI is rapidly integrating into critical sectors, including policing, finance, medicine, military strategy, and policymaking. It’s no longer a futuristic concept — it’s already reshaping the systems that govern daily life. Banks now use AI to determine creditworthiness, law enforcement relies on predictive algorithms to assess crime risk, and hospitals deploy machine learning to guide treatment decisions.

These technologies are becoming deeply embedded in society, often with little transparency or oversight. Who writes the algorithms? What biases are built into them? And who holds these systems accountable when they fail?

AI experts like Geoffrey Hinton, one of its pioneers, along with Elon Musk and OpenAI co-founder Ilya Sutskever, have warned that AI’s rapid development could spiral beyond human control. But unlike Cameron’s Terminator dystopia, the real threat isn’t humanoid robots with guns — it’s an AI infrastructure that quietly shapes reality, from financial markets to personal freedoms.

No fate but what we make

During his speech, Cameron argued that AI development must follow strict ethical guidelines and "hard and fast rules."

“How do you control such a consciousness? We embed goals and guardrails aligned with the betterment of humanity,” Cameron suggested. But he also acknowledges a key issue: “Aligned with morality and ethics? But whose morality? Christian, Islamic, Buddhist, Democrat, Republican?” He added that Asimov’s laws could serve as a starting point to ensure AI respects human life.

But Cameron’s argument, while well-intentioned, falls short. AI guardrails must protect individual liberty and cannot be based on subjective morality or the whims of a ruling class. Instead, they should be grounded in objective, constitutional principles — prioritizing individual freedom, free expression, and the right to privacy over corporate or political interests.

If we let tech elites dictate AI’s ethical guidelines, we risk surrendering our freedoms to unaccountable entities. Instead, industry standards must embed constitutional protections into AI design — safeguards that prevent corporations or governments from weaponizing these systems against the people they are meant to serve.

Cameron is right to sound the alarm. AI is no longer a theoretical risk — it is here, evolving rapidly, and integrating into every facet of society. The question is no longer whether AI will reshape the world but who will shape AI.

As Cameron’s films have always reminded us: The future is not set. There is no fate but what we make. If we want AI to serve humanity rather than control it, we must act now — before we wake up in a world where freedom has been quietly coded out of existence.

Trump’s promised ‘golden age’ collides with a tech revolution



President Donald Trump opened his second inaugural address by declaring, “The golden age of America begins right now.” His new term promises a transformational four years. While foreign policy, economic concerns, and political divisiveness will dominate headlines, a quieter yet far-reaching revolution is underway. Massive technological innovation coincides with Trump’s presidency, setting the stage for societal changes that will shape the coming decades. These advancements offer progress but also demand vigilance as the nation navigates their ethical and societal challenges.

By the time Trump leaves office in January 2029, artificial intelligence, automation, self-driving cars, quantum computing, and other emerging technologies will have reached unprecedented levels. Their evolution and impact on society will likely shape the future more profoundly than the political battles of today.


OpenAI, Tesla, and IBM are driving technological advancements, investing billions in research and development to turn science fiction into reality. The AI startup sector alone secured more than $100 billion in global investments last year. Companies pursuing quantum computing, including Google and IBM, are racing toward quantum supremacy, aiming for breakthroughs that could transform entire industries. Tesla and Waymo are investing billions in self-driving cars, positioning themselves to revolutionize transportation.

This surge in investment and innovation highlights the transformative power of these technologies. At the same time, it raises concerns about how society will navigate their rapid evolution. As these breakthroughs accelerate during Trump’s presidency, the stakes remain high — not only for harnessing their potential but also for mitigating their risks.

The rise of a new decision-maker

Artificial intelligence has advanced rapidly in recent years, evolving from narrow, task-specific algorithms to sophisticated systems capable of natural language understanding, image recognition, and even creative tasks like generating art and music. OpenAI’s ChatGPT and Google’s DeepMind have become household names, demonstrating AI's expanding role in everyday life and business.

By 2029, industry experts expect AI to grow more advanced and deeply integrated into society, influencing everything from health care to legal systems. Breakthroughs in generative AI could enable machines to produce realistic virtual experiences, transforming education, entertainment, and training. AI-driven research is also poised to accelerate discoveries in medicine and climate science, with algorithms identifying solutions beyond human capabilities.

These advancements promise significant benefits. AI could revolutionize medicine by personalizing treatments, reducing errors, and improving access to care. Businesses may see substantial productivity gains, driving economic growth and innovation. Everyday conveniences, from personal assistants to smart infrastructure, could enhance quality of life, relieving people from mundane tasks and fostering greater creativity and leisure.

The rapid integration of AI also raises serious concerns. As AI systems collect and analyze vast amounts of data, issues of surveillance, privacy, and consent demand attention. Automated decision-making could displace workers, worsen economic inequality, and foster new forms of dependency. Misuse — whether through biased algorithms, manipulative propaganda, or authoritarian control — heightens the need for vigilance. Protecting individual liberty and ensuring AI serves society, rather than undermining it, remains crucial.

Redefining the workforce

Advanced robotics and automation are rapidly transforming traditional industries. Robots already handle complex tasks in manufacturing, agriculture, and logistics, but improvements in dexterity and AI-driven decision-making could make them essential across nearly every sector by the decade’s end.

Several companies are racing to develop increasingly advanced robots. Tesla’s Optimus and Agility Robotics’ Digit are humanoid models designed to perform tasks once exclusive to humans. As Agility Robotics strengthens its partnership with Amazon, Elon Musk predicts robots will outnumber people within 20 years.

While automation boosts efficiency and productivity, it also threatens jobs. Millions of workers risk displacement, creating economic and social challenges that demand thoughtful solutions. The Trump administration will likely face mounting pressure to balance innovation with protecting livelihoods.

Who is in the driver’s seat?

Self-driving vehicle technology has long been anticipated, with Elon Musk initially predicting its emergence by 2019. While that timeline proved optimistic, autonomous vehicle technology has advanced significantly in recent years. What began as experimental prototypes has evolved into semi-autonomous systems operating in commercial fleets. By 2029, fully autonomous vehicles could become widespread, transforming transportation, urban planning, and logistics.

Despite these advancements, controversies remain. Questions about safety, liability, and infrastructure lack clear answers. Additionally, concerns about centralized control over transportation systems raise fears of surveillance and government overreach. The Trump administration will play a crucial role in shaping regulations that safeguard freedom while fostering innovation.

A massive computing breakthrough

Quantum computing, once limited to theoretical physics, is rapidly becoming a practical reality. IBM and Google have led advancements in this technology, with Google recently unveiling Willow, a state-of-the-art quantum computer chip. According to Google, Willow completed a complex computation in minutes — one that would have taken the world’s most advanced supercomputers 10 septillion years. That’s more than 700 trillion times the estimated age of our universe.
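As a quick check on that comparison, take Google's 10-septillion-year figure and an estimated universe age of about 13.8 billion years:

```python
willow_claim_years = 10e24   # 10 septillion years, per Google's announcement
universe_age_years = 13.8e9  # estimated age of the universe in years

ratio = willow_claim_years / universe_age_years
print(f"{ratio:.2e}")  # 7.25e+14, roughly 700 trillion times
```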

With the ability to solve problems at speeds unimaginable for classical computers, quantum computing could transform industries like cryptography, drug development, and economic modeling.

This technology also presents serious risks to privacy and security. Quantum computing’s ability to break traditional encryption methods could expose sensitive data worldwide. As the field advances, policymakers must develop strong regulations to protect privacy and ensure fair access to this powerful technology.

Trump’s most enduring legacy?

These technological advancements could drive extraordinary breakthroughs, including drug discoveries, disease cures, and an era of abundance. But they also pose significant risks. Concerns over data collection, job displacement, surveillance, and coercion are not hypothetical — they are real challenges that will require attention during Trump’s term.

The next few years will hinge on how society embraces innovation while protecting freedoms, privacy, and stability. Trump’s role in this technological revolution may not dominate headlines, but it will likely leave the most lasting impact.

Man vs. machine: Chinese robots will compete against humans in Beijing half-marathon



Over 12,000 long-distance runners will compete against 20 robots representing 20 companies in a half-marathon in China.

What is billed as the world's first human-versus-robot half-marathon will take place in April within the Beijing Economic-Technological Development Area, also known as Beijing's E-Town.

The district said in a press release that the robots will enter and compete on behalf of 20 teams from global robotics companies, researchers, robot clubs, and universities, among others. While there aren't many rules, robots must conform to a few loose parameters.

The robots must be capable of bipedal walking or running and must have a "humanoid appearance."

Robots are also not allowed to have wheels and must be between 1.6 and 6.5 feet tall. The maximum extension from the hip joint to the foot sole must be at least 0.45 meters.

Robots do not have to be fully autonomous to compete, however. Teams are allowed to remotely operate their robotic runners as they see fit.

According to Popular Science, prizes will be awarded to the top three finishers, no matter whether they are human or machine. The outlet also reported that no bipedal robot has successfully completed a race of such a length — 21.1 kilometers, or about 13.1 miles — giving the Homo sapiens an apparent advantage.

  Photo by Fu Ding/Beijing Youth Daily/VCG via Getty Images

Chinese robots appear to be running faster than American robots as of March 2024. That month, Hangzhou-based Unitree Robotics showcased its robot, the H1 V3.0, running at 7.38 mph along a flat surface. The previous Guinness World Record speed for such a robot was 5.59 mph by a machine made by Boston Dynamics.

A video posted by Unitree showed the Chinese robot can also lift small crates and make its way down a flight of stairs.

Less than a year later, a video posted in January showed updated designs and the ability to move much more smoothly and quickly, even on hilled terrain. The footage showcased the robot walking on city streets, running on sidewalks, and navigating public parks.

Unitree's quadruped robots provide even greater nightmare fuel, as the four-legged creations are capable of parkour, high-speed off-road travel, and jumping from extreme heights.

Beijing's E-Town has a reported 140 robotics companies with a total output of approximately $1.4 billion.

China said it will focus on industrializing high-end humanoid products and cutting-edge artificial intelligence technologies.


Is ‘The Wild Robot’ A Wholesome Family Film Or Transhumanist Propaganda?

Parents should talk to their children about what makes humans unique and beautiful and warn them to be wary of anyone seeking to demote humanity from being the pinnacle of creation.

High-tech hero: Video shows police bomb squad robot outsmart, pin down hotel gunman in Texas showdown



There's a new RoboCop in town. A police bomb squad robot singlehandedly incapacitated and pinned down an armed suspect in a Texas showdown.

The Texas Department of Criminal Justice said there was a warrant out for the arrest of 39-year-old Felix Delarosa because he violated his parole by tampering with his electronic monitoring device, KCBD reported.


Around 10 a.m. Wednesday, Texas Anti-Gang unit members tracked down Delarosa at a Days Inn hotel in Lubbock. Delarosa — who was armed at the time — reportedly fired a shot at officers from inside his room when they approached him.

The officers called the Lubbock County Sheriff’s SWAT team to assist with apprehending the suspect.

Officials said Delarosa fired another shot while SWAT negotiators attempted to convince him to peacefully surrender. During the negotiations, Delarosa — who was barricaded in his hotel room — allegedly fired more shots at officers.

A sheriff’s office sniper returned fire and allegedly struck Delarosa.

By this time in the standoff, the room's large glass window had been shattered amid the exchange of gunfire.

Robot to the rescue

The Lubbock Regional Bomb Squad deployed a robot to deal with the suspect without putting the lives of law enforcement in jeopardy. The bomb squad robot rolled up to Delarosa's hotel room. The suspect first tried to disable the robot by throwing a bed sheet over it, without success.

The robot approached the broken window, and the suspect shot his gun at the robot. The bomb squad robot countered by spraying tear gas into the room.

The suspect is seen on video desperately crawling out of the room and appears to be extremely disoriented from the tear gas.

While Delarosa was wriggling on the ground, the robot drove on top of him.

Then, as the robot pinned him to the ground, its wheels pulled down the suspect's pants.

SWAT team members swooped in to take Delarosa into custody two hours after the showdown began.

Delarosa was transported to University Medical Center for his injuries and then booked into the Lubbock County Detention Center.

Delarosa was charged with aggravated assault against a public servant.

The Texas Department of Criminal Justice noted that Delarosa was sentenced to 20 years in prison for manufacturing and delivering a controlled substance in 2017.

Delarosa was released from prison and placed on parole in April 2022.

Like Blaze News? Circumvent the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

Blaze News investigates: Brain downloads, self-driving cars, and autonomous beings: Is the media lying about the dangers of AI?



Industry experts and entrepreneurs alike think that artificial intelligence needs to be harnessed before it's too late.

Of course, any push toward widening the use of technology that would benefit a person's or a particular company's bottom line would likely be welcomed. At the same time, however, there is an overarching narrative that AI is on the brink of becoming incredibly dangerous.

The theme has existed for some time, though. As far back as 1991, "Terminator 2: Judgment Day" taught humanity that entrusting our weapons systems to AI would be a big mistake. Other films, like 2008's "Eagle Eye," showed a central AI that could track people anywhere they went, ruin their lives, and control the systems of society at its whim.

Fast forward to the present day, and it has seemed like every person in the know has warned about the dangers that artificial intelligence can bring.

Microsoft said in February 2024 that American enemies are preparing for AI-driven cyber attacks. Multiple former Google employees claimed AI at the company had become sentient and learned to feel, comparing it to "creating God."

Elon Musk even said AI is a "threat to humanity."

What do all of these sources have in common? Each owns or develops its own artificial intelligence platform. Just days after his comments, Musk announced Grok, the AI technology that is integrated into his X platform.

Microsoft has invested billions in OpenAI, while Google has its aptly named Google AI under its belt. This raises the question of whether these corporate talking points are simply deceptive marketing and misdirection or whether the experts in the field harbor a genuine worry about the path unfettered AI could go down.

'AI is like water in the sea, you can not like it, but if it goes against you, you will drown.'

Blaze News spoke with industry experts and AI entrepreneurs to find out whether or not the general consumer should be concerned with the direction companies are taking with their automated services.

Most didn't buy into the idea of an immediate threat stemming from artificial sources, offering answers that differed starkly from those of the big players. But what was stressed was the need for Western nations to harness and monetize AI before adversarial economies do it first.

"There are significant long-term dangers, but the risks of not utilizing AI now potentially exacerbates those long-term risks," said Christopher Fakouri, who represented the Czech Republic on the matter.

"If we don't utilize and develop now, we will lose out in the long term to other markets and other people ... a lot of countries and jurisdictions across the world are looking for market capture [with AI]; however, I would not underestimate those risks."

Whether this proposed arms race was strictly economic or also militaristic was not clear.

"[AI] functions on the human layer and helps augment excellence; the earlier we're ready to grasp the tools of augmented reality the earlier we can use these tools to benefit us," said Dr. Adejobi Adeloye from Amba Transfer, a company that uses AI technology to help seniors acquire medication.

"It is the future of the economy. Right now we are looking towards the era of artificial intelligence, of augmented reality, and virtual reality, and infusing it into education, manufacturing, and mining," the doctor added.

Return's Peter Gietl sees AI disrupting the marketplace in the near future but not in a doomsday sense that many are speculating.

"This means SEO, paralegal jobs ... but overall I don't see it as overwhelmingly replacing a mass amount of the job market," he said.

What is AI currently capable of?

Nuclear launches at the hands of AI wouldn't be completely out of the question based on the current rhetoric around the topic. But behind closed doors, the technology may not be nearly as far along as the public thinks.

Multiple representatives from IBM revealed that the technology isn't exactly ready for world domination. One spokesperson said the company isn't necessarily interested in selling products that use AI and is currently focused on harnessing the technology for use in sports. IBM has partnered with both Wimbledon and the Masters, using its technology to track data and enhance the user experience.

Fans can have AI detail up-to-date action from the events and even have it read to them as if it were play-by-play announcing.

"We're not hiding it or trying to make it seem like it's a real person," one representative who wanted to remain anonymous said. "We have voice actors who lend their voices to the technology." The spokesperson added that the most popular voice for golf has been a generic male from the American South.

That technology is called IBM watsonx.

The scary rhetoric is nowhere close to where AI technology currently is, either, the representative explained.

"It's nonsense," the IBM employee said. "An AI model was able to correctly identify four colors recently, and that was considered a huge breakthrough."

While it is possible the information was carefully crafted by the people at IBM with the intention to mislead, the representative could also simply be telling it like it is.

Gietl agreed, explaining that AI in its current state is still producing grave errors.

"There's a term called 'AI Hallucination.' AI will make things up that it thinks the user wants to hear. All of the programs are being trained and taught on human knowledge that exists online, which of course includes a mass amount of non sequiturs and misinformation."

"A lot of rhetoric is scare tactic propaganda put out by major companies to scare everyone into thinking AI is much more advanced than it is at the moment, and presents existential danger to the economy and national defense," Gietl continued. "By doing that they can scare people into accepting regulatory capture — these companies want to capture the market and regulate it."

'Eventually we will become a society of empowered, independent, AIs.'

The other side of the coin is indeed bleak and does include the aforementioned spooky rhetoric.

Dr. Adeloye said those facing job loss need to recognize when "the cheese" has moved.

"Certain things are inevitable if you're not ready to understand that the cheese has moved, and you need to move and find new cheese. The handwriting is on the wall ... your professional job may be on the line."

Rat-maze comparisons pale next to what Olga Grass, a representative of the forward-thinking company AISynt, described.

The company, which Grass said was based on the research of a "scientist who formerly worked for the Soviet Union," is working toward developing autonomous AI beings.

"We don't have the real AI just yet. Real intelligence is not computational, it's not algorithm-based," Grass said. "The real AI is a digital nervous system that learns for itself, and has the ability to build from the environment."

The representative went on to liken the company's technology to raising a child or training a dog — learning from its environment. AISynt can certainly be described as ambitious but also frightening.

While Grass sold the technology as a personal AI system that "empowers" and protects from other AI systems, the company's website is much more Matrix-esque.

The technology promises brain downloads, instant learning, and living/learning beings.

"Living, digital, evolving forms of any nervous system," Grass said. She then claimed that the technology was already in use with a "neural matrix" in the form of an autonomous drone that thinks and learns for itself.

Imminent job loss

AI is a field that is on fire, and, as such, the term is being used colloquially as a buzzword to sell almost anything. Blaze News chatted with representatives from customer service, job-posting software, social media aggregation, and everything in between. Each promised unique, first-to-market opportunities with AI.

Companies are using the verbiage to "race for venture capital money and startup funds," Gietl explained. "Even the kooks and crazy people."

Oleh Redko, CEO at Business!Go, said governments need to make a strategy sooner rather than later to prevent massive job losses.

"AI is like water in the sea, you can not like it, but if it goes against you, you will drown. Many people are against AI and some people are for AI, but we need to accept it and manage it and try to make it safe."

The entrepreneur stressed that governments don't have the right to come after companies after the fact with taxation and regulation simply because they didn't have the foresight to prepare for the technological advancements. He predicted job market changes are five to 10 years away.

On the other hand, AISynt has an outlook that is completely different from all the other representatives in the AI marketplace:

"Eventually we will become a society of empowered, independent, AIs."

Move over, Bruce Willis.

More people want romance with robots and cartoons. Is this really our future?



The specter of humans being replaced by their creations has long haunted the collective psyche. We have always feared human obsolescence, from assembly lines to personal computers to, now, artificial intelligence.

The discussion often centers on labor — machines taking our jobs. The factory worker replaced by an assembly line; the software engineer replaced by artificial intelligence.

There’s also a longstanding fascination, if not outright fear, that something similar could happen in romance, from the ancient Greek myth of Pygmalion and his ivory statue brought to life to early science fiction featuring artificial women, like Auguste Villiers de l'Isle-Adam's novel "The Future Eve" (1886). Recent works like the 2007 film "Lars and the Real Girl" and "Her" (2013) have carried that deep-seated dread into the 21st century. Yet we still seem to come back to the idea that while, yes, maybe machines can change how we work, they won’t be able to change how we love: Artificial companions cannot truly replace genuine human connection, no matter how lifelike or personalized. Love is what makes us human.

I suspect it’s likely that AI boyfriends will present a more complex challenge than AI girlfriends.

As technology has advanced, the idea of artificial companions has shifted from the realm of film and literature to sex dolls and, more recently, AI-powered virtual partners, with disproportionate attention placed on AI girlfriends. Men are addicted to porn; this is the next iteration, right?

A few months ago, an article about the supposed rise of AI girlfriends went viral on X.

The crux of the piece was: "If women become infinitely personalizable (and probably beautiful), how will real-life women compete?" Most people in my corner of social media were skeptical, arguing that what makes romance romantic isn't perfection or customization. Even with OnlyFans models, there’s some promise — no matter how small — of connecting with a real person.

And while it’s true that some people indeed enjoy erotic roleplaying with AI, it’s rarely to the exclusion of a human girlfriend or boyfriend. If there is no human in the picture, it’s likely because they cannot find one, not because they are unwilling to. What’s more, this may be true even if they do have a specific fetish for AI or robots.

What came first, the gooner (internet-speak for compulsive masturbator) or the microwavable meal for one? To some critics, the answer is the former: These technologies aren’t a symptom of isolation but the cause of it. While that’s tempting — blame the porn, blame the robots — everything we know, from the eccentric to the mundane, suggests otherwise.

I'm reminded of a 2007 article by MIT professor and sociologist Sherry Turkle, “Authenticity in the Age of Digital Companions.” Then, now, and significantly before 2007, machines have existed in a liminal space, both inauthentic and alive.

Children, for example, perceive machines as emotional, sometimes "living" beings. We also have emotional responses to what Turkle calls "relational artifacts," objects like Furbies, Tamagotchi pets, and, these days, ChatGPT (ever apologized or said "please" after a request?). Turkle wrote that we can form emotional relationships with them, but they aren't comparable to our relationships with other people. Turkle ends the piece with an anecdote about a friend who is severely disabled, one that I think is still relevant.

"Show me a person in my shoes who is looking for a robot, and I'll show you someone who is looking for a person and can't find one," he tells her.

According to Turkle:

[Richard] turned the conversation to human cruelty: "Some of the aides and nurses at the rehab center hurt you because they are unskilled and some hurt you because they mean to. I had both. One of them, she pulled me by the hair. One dragged me by my tubes. A robot would never do that," he said. "But you know in the end, that person who dragged me by my tubes had a story. I could find out about it."

For Richard, being with a person, even an unpleasant, sadistic person, made him feel that he was still alive. It signified that his way of being in the world still had a certain dignity, for him the same as authenticity, even if the scope and scale of his activities were radically reduced. This helped sustain him. Although he would not have wanted his life endangered, he preferred the sadist to the robot.

Richard's perspective on living is a cautionary word to those who would speak too quickly or simply of purely technical benchmarks for our interactions. What is the value of interactions that contain no understanding of us and that contribute nothing to a shared store of human meaning? These are not questions with easy answers, but questions worth asking and returning to.

The counterargument concerns whether that lack of authenticity arises because we know machines are not human or because the technology isn't there yet.

I tend toward the former. Even in the “Love Revolution” manifesto of the “fictosexual” writer Honda Toru (that’s someone who knowingly seeks romantic relationships with fictional characters, as opposed to real people), there are echoes of "I am like this because I have to be" as opposed to "I am like this because I was born this way":

"Some of us find satisfaction with fictional characters. It's not for everyone, but maybe more people would recognize this life choice if it wasn't always belittled. Forcing people to live up to impossible ideals so they can participate in so-called reality creates so-called losers, who in their despair might lash out."

Reading Toru's writing about “love capitalism,” a term he uses to describe the transactional nature of romance in Japan, it seems like he wouldn't have chosen a “waifu,” or anime wife, if he felt more accepted by society.

Talking to Cait Calder, another fictosexual, I got a similar impression.

Neither Cait nor Toru argue that their attraction to and love of fictional characters aren't real — they describe the experience as weird, wonderful, and authentic — and both want acceptance for who they are. But there is also an acknowledgment that this orientation doesn't emerge in a vacuum, whether they say so explicitly, like Toru does, or implicitly like Cait did when she spoke about her autism diagnosis.

I wonder if the quest to stop invalidating these relationships rests partly on the argument that they're not maladaptive; they're perfectly rational in our mediated and sometimes very alienating world as it is.

Gender dynamics also complicate this conversation, with women overwhelmingly framed as the losers as men choose simulated women over real ones. That’s intuitive, but I think it's incorrect. I suspect AI boyfriends will present a more complex challenge than AI girlfriends.

My prediction is that AI boyfriends will trend in four core manifestations:

  1. For a minority, like fictosexuals or those who are deeply committed to a fandom, AI companions will substitute for physical world romantic partners. However, even within this community, many report not being able to fully suspend disbelief, finding AI interactions fun but less satisfying than daydreaming or writing fan fiction.
  2. AI will be a form of play, similar to The Sims, playing with dolls, or role-playing. While potentially addictive, it won't be a 1:1 substitution for human interaction.
  3. They will be a form of erotica, similar to romance novels, with some users preferring to "play a character" within the AI chat narrative universe. They may become popular in fandom communities.
  4. They'll be deployed in romance scams against the naive and gullible, like those who believe celebrities are directly messaging them on Instagram.

Among these manifestations, the third one seems most likely to gain traction. This is because there is already a well-established precedent for women forming emotional attachments to fictional characters and celebrities and engaging in fantasy relationships through various media, including romance novels and fan fiction. AI boyfriends could serve as an interactive, personalized extension of these existing tendencies, allowing women to engage in immersive, emotionally satisfying experiences tailored to their desires and needs.

That being said, any AI companion's threat to real-life relationships is likely overstated.

Text-based roleplaying and dating simulation games have been around for years. While they can provide a sense of connection and fulfillment, they have not replaced the desire for human companionship. They're proxies for it. That's what all of this stuff is — a proxy. Since time immemorial, no teenage girl has preferred a Sherlock Holmes, an Edward Cullen, or a boyband member to a real-life boyfriend.

The same is broadly true in reverse until AI can power sex dolls. Unfortunately, the jury's still out on sex robots that can strongly mimic a human woman. As it stands, though, ChatGPT, Replika, character.ai, and Digi are not substitutes for girlfriends among men who feel confident in their ability to find one. When this type of media becomes an obsession, it betrays a lack in one's life. If these companions instill unrealistic expectations in people, those are people who've had very few opportunities to have their expectations lowered.

Ultimately, I don't believe AI companions will become widespread, sustainable substitutions for physical-world partners or replace dead loved ones, as in the film "Marjorie Prime." The uncanny valley (the unsettling feeling when AI or robots closely resemble humans but are not quite convincingly realistic) will likely limit their appeal. In the end, people crave genuine human connections, and while AI companions may offer a temporary salve for loneliness, they cannot replace the depth and authenticity of another person.

I do see a halfway point becoming more common in the future, and indeed, this might be the situation we’re living in now.

A surge in internet-native (but not dating-app-native) relationships and prolonged pre-dating communication squares better with what we know about younger generations. As dating apps lose favor while online socialization continues, meeting potential partners and friends online is becoming more common and accepted. People aren't ashamed to have "internet friends" anymore, and it seems like every app except dating apps is used for dating.

People still crave uniquely human connections, but in an increasingly isolated world, the compromise is human-machine-human interaction, not human-machine. While these technologies can provide comfort and companionship for some, they cannot substitute for the richness and authenticity of face-to-face human interaction.