Is ‘The Wild Robot’ A Wholesome Family Film Or Transhumanist Propaganda?

Parents should talk to their children about what makes humans unique and beautiful and warn them to be wary of anyone seeking to demote humanity from being the pinnacle of creation.

High-tech hero: Video shows police bomb squad robot outsmart, pin down hotel gunman in Texas showdown



There's a new RoboCop in town. A police bomb squad robot singlehandedly incapacitated and pinned down an armed suspect in a Texas showdown.

The Texas Department of Criminal Justice said there was a warrant out for the arrest of 39-year-old Felix Delarosa because he violated his parole by tampering with his electronic monitoring device, KCBD reported.

Around 10 a.m. Wednesday, Texas Anti-Gang unit members tracked down Delarosa at a Days Inn hotel in Lubbock. Delarosa — who was armed at the time — reportedly fired a shot at officers from inside his room when they approached.

The officers called the Lubbock County Sheriff’s SWAT team to assist with apprehending the suspect.

Officials said Delarosa fired another shot while SWAT negotiators attempted to convince him to surrender peacefully. As negotiations continued, Delarosa — who was barricaded in his hotel room — allegedly fired several more shots at officers.

A sheriff’s office sniper returned fire and allegedly struck Delarosa.

By this time in the standoff, the room's large glass window had been shattered amid the exchange of gunfire.

Robot to the rescue

The Lubbock Regional Bomb Squad deployed a robot to deal with the suspect without putting the lives of law enforcement in jeopardy. The robot rolled up to Delarosa's hotel room, and the suspect first attempted to disable it by throwing a bed sheet over it, to no effect.

The robot approached the broken window, and the suspect fired at it. The robot then countered by spraying tear gas into the room.

Video shows the suspect desperately crawling out of the room, appearing extremely disoriented from the tear gas.

While Delarosa was wriggling on the ground, the robot drove on top of him.

Then, as the robot pinned him to the ground, its wheels pulled down the suspect's pants.

SWAT team members swooped in to take Delarosa into custody two hours after the showdown began.

Delarosa was transported to University Medical Center for his injuries and then booked into the Lubbock County Detention Center.

Delarosa was charged with aggravated assault against a public servant.

The Texas Department of Criminal Justice noted that Delarosa was sentenced in 2017 to 20 years in prison for manufacturing and delivering a controlled substance.

Delarosa was released from prison and placed on parole in April 2022.

Blaze News investigates: Brain downloads, self-driving cars, and autonomous beings: Is the media lying about the dangers of AI?



Industry experts and entrepreneurs alike think that artificial intelligence needs to be harnessed before it's too late.

Of course, any push toward widening the use of a technology is likely to be welcomed by the people and companies whose bottom lines it benefits. At the same time, however, there is an overarching narrative that AI is on the brink of becoming incredibly dangerous.

The theme has existed for some time, though. All the way back in 1991, "Terminator 2: Judgment Day" taught humanity that entrusting our weapons systems to AI would be a big mistake. Other films, like 2008's "Eagle Eye," showed how a central AI could track people anywhere they went and ruin their lives, controlling the systems of society at its whim.

Fast forward to the present day, and it seems as if every person in the know has warned about the dangers that artificial intelligence can bring.

Microsoft said in February 2024 that America's enemies are preparing for AI-driven cyberattacks. Multiple former Google employees claimed AI at the company had become sentient and learned to feel, comparing it to "creating God."

Elon Musk even said AI is a "threat to humanity."

What do all of these sources have in common? Each owns or develops its own artificial intelligence platform. Just days after his comments, Musk announced Grok, the AI technology that is integrated into his X platform.

Microsoft has invested billions in OpenAI, while Google has its aptly named Google AI under its belt. This raises the question of whether these corporate talking points are simply deceptive marketing and misdirection or whether the experts in the field truly worry about the path unfettered AI could go down.

'AI is like water in the sea, you can not like it, but if it goes against you, you will drown.'

Blaze News spoke with industry experts and AI entrepreneurs to find out whether or not the general consumer should be concerned with the direction companies are taking with their automated services.

Most didn't buy into the idea of an immediate threat stemming from artificial sources, offering stark differences in their answers compared to those of the big players. But what they stressed was the need for Western nations to harness and monetize AI before adversarial economies do it first.

"There are significant long-term dangers, but the risks of not utilizing AI now potentially exacerbates those long-term risks," said Christopher Fakouri, who represented the Czech Republic on the matter.

"If we don't utilize and develop now, we will lose out in the long term to other markets and other people ... a lot of countries and jurisdictions across the world are looking for market capture [with AI]; however, I would not underestimate those risks."

Whether this proposed arms race was strictly economic or also militaristic was not clear.

"[AI] functions on the human layer and helps augment excellence; the earlier we're ready to grasp the tools of augmented reality the earlier we can use these tools to benefit us," said Dr. Adejobi Adeloye from Amba Transfer, a company that uses AI technology to help seniors acquire medication.

"It is the future of the economy. Right now we are looking towards the era of artificial intelligence, of augmented reality, and virtual reality, and infusing it into education, manufacturing, and mining," the doctor added.

Return's Peter Gietl sees AI disrupting the marketplace in the near future, but not in the doomsday sense many are speculating about.

"This means SEO, paralegal jobs ... but overall I don't see it as overwhelmingly replacing a mass amount of the job market," he said.

What is AI currently capable of?

Judging by the current rhetoric around the topic, nuclear launches at the hands of AI wouldn't seem completely out of the question. But behind closed doors, the technology may not be nearly as far along as the public thinks.

Multiple representatives from IBM indicated that the technology isn't exactly ready for world domination. One spokesperson said the company isn't necessarily interested in selling products that use AI and is currently focused on harnessing the technology for use in sports. IBM has partnered with both Wimbledon and the Masters, sharing its technology to track data and enhance the fan experience.

Fans can have AI detail up-to-date action from the events and even have it read to them as if it were play-by-play announcing.

"We're not hiding it or trying to make it seem like it's a real person," one representative who wanted to remain anonymous said. "We have voice actors who lend their voices to the technology." The spokesperson added that the most popular voice for golf has been a generic male from the American South.

That technology is called IBM watsonx.

The scary rhetoric is nowhere close to where AI technology currently stands either, the representative explained.

"It's nonsense," the IBM employee said. "An AI model was able to correctly identify four colors recently, and that was considered a huge breakthrough."

While it is possible that the messaging was carefully crafted by the people at IBM with the intention to mislead, the representative could also simply be telling it like it is.

Gietl agreed, explaining that AI in its current state is still producing grave errors.

"There's a term called 'AI Hallucination.' AI will make things up that it thinks the user wants to hear. All of the programs are being trained and taught on human knowledge that exists online, which of course includes a mass amount of non sequiturs and misinformation."

"A lot of rhetoric is scare tactic propaganda put out by major companies to scare everyone into thinking AI is much more advanced than it is at the moment, and presents existential danger to the economy and national defense," Gietl continued. "By doing that they can scare people into accepting regulatory capture — these companies want to capture the market and regulate it."

'Eventually we will become a society of empowered, independent AIs.'

The other side of the coin is indeed bleak and does include the aforementioned spooky rhetoric.

Dr. Adeloye said those facing job loss should take note of when "the cheese" has moved.

"Certain things are inevitable if you're not ready to understand that the cheese has moved, and you need to move and find new cheese. The handwriting is on the wall ... your professional job may be on the line."

The rat-maze comparisons pale next to what Olga Grass, a representative of the forward-thinking company AISynt, described.

The company, which Grass said was based on the research of a "scientist who formerly worked for the Soviet Union," is working in the direction of developing autonomous AI beings.

"We don't have the real AI just yet. Real intelligence is not computational, it's not algorithm-based," Grass said. "The real AI is a digital nervous system that learns for itself, and has the ability to build from the environment."

The representative went on to liken the company's technology to raising a child or training a dog — learning from its environment. AISynt can certainly be described as ambitious but also frightening.

While Grass sold the technology as a personal AI system that "empowers" and protects from other AI systems, the company's website is much more Matrix-esque.

The technology promises brain downloads, instant learning, and living/learning beings.

"Living, digital, evolving forms of any nervous system," Grass said. She then claimed that the technology was already in use with a "neural matrix" in the form of an autonomous drone that thinks and learns for itself.

Imminent job loss

AI is a field on fire, and, as such, the term is being used colloquially as a buzzword to sell almost anything. Blaze News chatted with representatives from customer service, job-posting software, social media aggregation, and everywhere in between. Each promised unique, first-to-market opportunities with AI.

Companies are using the verbiage to "race for venture capital money and startup funds," Gietl explained. "Even the kooks and crazy people."

Oleh Redko, CEO at Business!Go, said governments need to make a strategy sooner rather than later to prevent massive job losses.

"AI is like water in the sea, you can not like it, but if it goes against you, you will drown. Many people are against AI and some people are for AI, but we need to accept it and manage it and try to make it safe."

The entrepreneur stressed that governments don't have the right to come after companies after the fact with taxation and regulation simply because they didn't have the foresight to prepare for the technological advancements. He predicted job market changes are five to 10 years away.

On the other hand, AISynt has an outlook that is completely different from all the other representatives in the AI marketplace:

"Eventually we will become a society of empowered, independent, AIs."

Move over, Bruce Willis.

More people want romance with robots and cartoons. Is this really our future?



The specter of humans being replaced by their creations has long haunted the collective psyche. We have always feared human obsolescence, from assembly lines to personal computers to, now, artificial intelligence.

The discussion often centers on labor — machines taking our jobs: the factory worker replaced by an assembly line; the software engineer replaced by artificial intelligence.

There’s also a longstanding fascination, if not outright fear, that something similar could happen in romance, from the ancient Greek myth of Pygmalion and his ivory statue brought to life to early science fiction works featuring artificial women, like Auguste Villiers de l'Isle-Adam's novel "The Future Eve" (1886). Recent works like the 2007 film "Lars and the Real Girl" or "Her" (2013) have taken our deep-seated dread into the 21st century, but we still seem to come back to the idea that while, yes, maybe machines can change how we work, they won’t be able to change how we love: Artificial companions cannot truly replace genuine human connection, no matter how lifelike or personalized. Love is what makes us human.

I suspect it’s likely that AI boyfriends will present a more complex challenge than AI girlfriends.

As technology has advanced, the idea of artificial companions has shifted away from the realm of film and literature to sex dolls and, more recently, AI-powered virtual partners, with disproportionate attention placed on AI girlfriends. Men are addicted to porn, and this is the next iteration — right?

A few months ago, an article about the supposed rise of AI girlfriends went viral on X.

The crux of the piece was: "If women become infinitely personalizable (and probably beautiful), how will real-life women compete?" Most people in my corner of social media were skeptical, arguing that what makes romance romantic isn't perfection or customization. Even with OnlyFans models, there’s some promise — no matter how small — of connecting with a real person.

And while it’s true that some people indeed enjoy erotic roleplaying with AI, it’s rarely to the exclusion of a human girlfriend or boyfriend. If there is no human in the picture, it’s likely because they cannot find one, not because they are unwilling to. What’s more, this may be true even of those who have a specific fetish for AI or robots.

What came first, the gooner (internet-speak for compulsive masturbator) or the microwavable meal for one? To some critics, the answer is the latter: These technologies aren’t a symptom of isolation but its cause. While that’s tempting — blame the porn, blame the robots — everything we know, from the eccentric to the mundane, suggests otherwise.

I'm reminded of a 2007 article by MIT professor and sociologist Sherry Turkle, “Authenticity in the Age of Digital Companions.” Then, now, and well before 2007, machines have occupied a liminal space: at once inauthentic and alive.

Children, for example, perceive machines as emotional, sometimes "living" beings. We also have emotional responses to what Turkle calls "relational artifacts" — objects like Furbies, Tamagotchi pets, and, these days, ChatGPT (ever apologized or said please after a request?). Turkle wrote that we can form emotional relationships with them, but they aren't comparable to our relationships with other people. She ends the piece with an anecdote about a friend who is severely disabled, one that I think is still relevant.

"Show me a person in my shoes who is looking for a robot, and I'll show you someone who is looking for a person and can't find one," he tells her.

According to Turkle:

[Richard] turned the conversation to human cruelty: "Some of the aides and nurses at the rehab center hurt you because they are unskilled and some hurt you because they mean to. I had both. One of them, she pulled me by the hair. One dragged me by my tubes. A robot would never do that," he said. "But you know in the end, that person who dragged me by my tubes had a story. I could find out about it."

For Richard, being with a person, even an unpleasant, sadistic person, made him feel that he was still alive. It signified that his way of being in the world still had a certain dignity, for him the same as authenticity, even if the scope and scale of his activities were radically reduced. This helped sustain him. Although he would not have wanted his life endangered, he preferred the sadist to the robot.

Richard's perspective on living is a cautionary word to those who would speak too quickly or simply of purely technical benchmarks for our interactions. What is the value of interactions that contain no understanding of us and that contribute nothing to a shared store of human meaning? These are not questions with easy answers, but questions worth asking and returning to.

The counterargument turns on whether that lack of authenticity arises because we know machines are not human or because the technology isn't there yet.

I tend toward the former. Even in the “Love Revolution” manifesto of the “fictosexual” writer Honda Toru (that’s someone who knowingly seeks romantic relationships with fictional characters, as opposed to real people), there are the echoes of "I am like this because I have to be" as opposed to "I am like this because I was born this way":

"Some of us find satisfaction with fictional characters. It's not for everyone, but maybe more people would recognize this life choice if it wasn't always belittled. Forcing people to live up to impossible ideals so they can participate in so-called reality creates so-called losers, who in their despair might lash out."

Reading Toru's writing about “love capitalism,” a term he uses to describe the transactional nature of romance in Japan, it seems like he wouldn't have chosen a “waifu,” or anime wife, if he felt more accepted by society.

Talking to Cait Calder, another fictosexual, I got a similar impression.

Neither Cait nor Toru argue that their attraction to and love of fictional characters aren't real — they describe the experience as weird, wonderful, and authentic — and both want acceptance for who they are. But there is also an acknowledgment that this orientation doesn't emerge in a vacuum, whether they say so explicitly, like Toru does, or implicitly like Cait did when she spoke about her autism diagnosis.

I wonder if part of the quest to get people to stop invalidating these relationships is the argument that they're not maladaptive; they're perfectly rational in our mediated and sometimes very alienating world as it is.

Gender dynamics also complicate this conversation, with women overwhelmingly framed as the losers as men choose simulated women over real ones. That’s intuitive, but I think it's incorrect. I suspect it’s likely that AI boyfriends will present a more complex challenge than AI girlfriends.

My prediction is that AI boyfriends will trend in four core manifestations:

  1. For a minority, like fictosexuals or those who are deeply committed to a fandom, AI companions will substitute for physical world romantic partners. However, even within this community, many report not being able to fully suspend disbelief, finding AI interactions fun but less satisfying than daydreaming or writing fan fiction.
  2. AI will be a form of play, similar to The Sims, playing with dolls, or role-playing. While potentially addictive, it won't be a 1:1 substitution for human interaction.
  3. They will be a form of erotica, similar to romance novels, with some users preferring to "play a character" within the AI chat narrative universe. They may become popular in fandom communities.
  4. They'll be deployed in romance scams against the naive and gullible, like those who believe celebrities are directly messaging them on Instagram.

Among these manifestations, the third one seems most likely to gain traction. This is because there is already a well-established precedent for women forming emotional attachments to fictional characters and celebrities and engaging in fantasy relationships through various media, including romance novels and fan fiction. AI boyfriends could serve as an interactive, personalized extension of these existing tendencies, allowing women to engage in immersive, emotionally satisfying experiences tailored to their desires and needs.

That being said, any AI companion's threat to real-life relationships is likely overstated.

Text-based roleplaying and dating simulation games have been around for years. While they can provide a sense of connection and fulfillment, they have not replaced the desire for human companionship. They're proxies for it. That's what all of this stuff is — a proxy. No teenage girl, since time immemorial, has preferred a Sherlock Holmes, an Edward Cullen, or a boyband member to a real-life boyfriend.

The same is broadly true in reverse, at least until AI can power sex dolls; sex robots that can strongly mimic a human woman aren't here yet. As it stands, though, ChatGPT, Replika, character.ai, and Digi are not substitutes for girlfriends among men who feel confident in their ability to find one. When this type of media becomes an obsession, it betrays a lack in one's life. If these companions inculcate unrealistic expectations, they do so in people who've had very few opportunities to have their expectations lowered.

Ultimately, I don't believe AI companions will become widespread, sustainable substitutions for physical-world partners or replace dead loved ones, as in the film "Marjorie Prime." The uncanny valley (the unsettling feeling when AI or robots closely resemble humans but are not quite convincingly realistic) will likely limit their appeal. In the end, people crave genuine human connections, and while AI companions may offer a temporary salve for loneliness, they cannot replace the depth and authenticity of another person.

I do see a halfway point becoming more common in the future, and indeed, this might be the situation we’re living in now.

A surge in internet-native (but not dating-app-native) relationships and prolonged pre-dating communication squares better with what we know about younger generations. As dating apps lose favor while online socialization continues, meeting potential partners and friends online is becoming more common and accepted. People aren't ashamed to have "internet friends" anymore, and it seems like every app except dating apps is used for dating.

People still crave uniquely human connections, but in an increasingly isolated world, the compromise is human-machine-human interaction, not human-machine. While these technologies can provide comfort and companionship for some, they cannot substitute the richness and authenticity of face-to-face human interactions.

Meet the self-driving car ending road rage (and ethics)



“Watching full self driving cut the entire line to make a left proves that AGI is here.” That’s how user @0xgaut, posting to X, came to grips with a video from @AIDRIVR showing Tesla’s latest model creep into an open spot in a backed-up left-hand turn lane, just like a human driver who’d draw howls, honks, and possibly physical hostility for trying the same.

Does this count as “artificial general intelligence”? Hardly. It’s simply the logical move given the destination and the problem set posed by the array of cars and streets involved. But while some might be moved to attack self-driving cars for making road-rage-inducing moves, the futility (and lawsuits) involved in that kind of hostile reaction points to the way that computerization far short of fabled AGI can and will still have incredibly sweeping effects.

Perhaps if wayward Westerners hadn’t been so beguiled by the promise of secular ethics, we wouldn’t be in this mess today.

In this telling case, the consequences would run like this: Self-driving cars connected into a large-scale network will optimize for driving paths that best balance and harmonize individual and aggregate routes. If your car happens to take a route that strikes you as less than optimal from your own personal standpoint, what can you do about it, and how is it worth feeling about it?

Instead of feeling aggrieved, outraged, or victimized by some jerk you must somehow get revenge on, if only with a shake of the fist, you’ll probably just sigh a little and accept the situation, even if deep down you’re hit with a passing sensation of wishing the whole automated system disappeared and we went back to the days of horses and buggies.

What this augurs — wherever we find such scaled-up networks of automated automobiles (lol) — is not just an end to road rage as we knew it. It’s an end to the idea that basic public order is rooted in a shared experience of justice that depends on people personally living out ethical behavior.

At first blush, this looks like an attack on some of the most familiar foundations of what we like to think of as Western civilization. But, interestingly, right now the dominant Western vision of the proper relationship between society, justice, and ethics is wokeness. Yes, wokes say! Basic public order depends on a collective social justice experience in which everyone is expected to ground their every choice and behavior in ethical principles, such as diversity, equity, inclusion, belonging, etc.!

It’s enough to make a person question just what it is we mean, or thought we meant, by Western civilization. If networks of automated automobiles threaten the familiar contours of principled public philosophy in the West, they must pose an even greater threat to the woke notion of principled public philosophy that has emerged from Western thought to seize power ... right?

Things get even more curious when you consider that Western civ got into this predicament by trying to find ecumenically nonreligious ways of optimizing for public order, but that woke civ has learned from the failure of that project by injecting back into it a new kind of religion, one that worships justice itself — and turns to technology in the hopes of perfecting the execution of justice on earth via woke programming of planetary supercomputers. There’s not much use for ethics in a society where justice-worshippers use omnipresent AI trained to micro-adjust everyone’s lived experiences in real-time, microaggression by microaggression, rewarding and punishing trillions of times a second with nanoscopic perfection.

Such a world holds out the promise of transcending not just ethics but Christianity and all its spiritual practices, from discipline and discernment to repentance and forgiveness. Perhaps if wayward Westerners hadn’t been so beguiled by the promise of secular ethics, we wouldn’t be in this mess today. And, perhaps, at least some Westerners — of a tomorrow coming sooner than we might think — will reason that they don’t need to wait for the coming of the Tesla hive mind to switch out their complex intellectual ethics for the simple commandments of Christ.

Consumers can't tell the difference between human-made and AI-generated videos, study suggests



A survey of U.S. consumers indicated that a strong majority of Americans would be comfortable supporting government regulation that required labeling on artificial intelligence-generated content. The same consumers surveyed had difficulty discerning AI-generated video from human-made video content.

Americans were asked in a HarrisX survey whether they want "U.S. Lawmakers to Require Labeling on AI-Generated Content," and most said they would support such an endeavor.

Consumers were asked about fully AI-created videos, photos, writings, music, captions, sounds, and more, Variety reported.

The strongest response came in regard to labeling AI videos and photos, at 74% and 72%, respectively. Only 61% of respondents supported labeling AI-generated sounds and captions, the lowest level of support in the survey.

Even though a majority of consumers supported mandatory labeling for every media type they were asked about, a more concerning result came out of an attached task for each respondent: determining whether a video was real or AI-generated.

The video-based survey was conducted using OpenAI's Sora, a text-to-video AI generator.

Participants were shown a total of eight videos, four of which were AI-generated and four of which were human-made. A majority of viewers correctly identified the origin of a video just once among the AI-generated clips and once among the human-made ones.

An AI video of a close-up on a person's eye had 50% of respondents declaring it wasn't authentic, while AI content showing panning footage of a town was correctly deemed fake by 56% of viewers.

A human-made video of a city was the only footage correctly labeled as created by a human, with 57% of viewers saying it was real.


The same survey respondents made it clear that they would also support government regulation in terms of protecting certain job sectors from the impact of artificial intelligence.

In total, 76% said that they would support the government implementing "strong regulations to protect jobs Sora and AI could impact." Just 24% said that "strong regulations will stifle innovation and prevent more jobs from being created by the new technology."

The survey of 1,082 U.S. adults was consistent across demographics, with those ages 50-64 most likely to support the regulations (81%) and those ages 35-49 least likely (71%).

Women were more likely than men to support the legislation, by six percentage points.

Respondents also ranked several other notable regulation types highly: accountability rules for the companies responsible for AI content output (39%), stricter privacy laws for collecting user data (34%), and the development of ethics standards for AI (33%).

CES diary day one: AI everything



The Consumer Electronics Show in Las Vegas combines the tech world's most incredible, weirdest, and most useless impulses in one show. You get a sneak peek at the gadgets and gear that will make a splash in the coming year. It’s 2 million square feet, 200,000 people, and 5,000 companies coming together to showcase the best that Silicon Valley and the world have to offer. It’s also acres of silly products no one needs. Does your oven need Alexa integration? Probably not. Does a massage chair need artificial intelligence? I’m going to go out on a limb and say no.

The name of the game this year is AI. Every conceivable product touts its integration with this burgeoning tech. The usefulness of a large language model for your toilet remains to be seen. But it’s the hot piece of technology every journalist and company is keen to promote, so it’s ubiquitous.

Landing in Vegas and driving into the Strip always reminds me of how much Vegas is America distilled into a city. Not the civic-minded ideals of Americana, but rather a decadent corporation that can fulfill every desire our late-stage capitalist society can imagine. It’s opulence and vice, charisma and cringe, all in a desert mirage. Now with a giant sphere staring at tourists with its all-seeing eye, but more on that next post.

Getting in midday, I decided to stick to the Venetian and Wynn to explore their convention halls. I’ve always loved walking Eureka Park, reserved for up-and-coming startups hunting for VC money. There’s a fantastic mix of enthusiasm and pure entrepreneurial spirit with a dash of huckster energy, making for an exhilarating atmosphere.

There’s also the reminder of why technology can be so cool when it benefits society in novel ways. I saw two companies using haptic feedback to help blind people “see” the world. One built a cane with haptic inputs for blind children; the other made glasses that buzz when something blocks the wearer’s path. That’s pretty darn cool.

You can also find Daymond John from "Shark Tank" promoting an amazing wireless TV.

Invariably, there will be tech that terrifies you. Going through the Amazon House of the Future was one of those moments. There are beds that track your sleep patterns and glasses that allow you to talk to Alexa 24/7 as it pumps sound into your brain.

As well as baby's first touch screen.

But what was truly disturbing was the hell-spawned monstrosity of creepiness called Moxie. It’s a robot/doll with a human-like face that uses ChatGPT to “talk” to young children. It’s almost impossible to express how off-putting this product was.

In the future, we won’t have to raise our children; we can just rely on demonic AI androids to do it for us. And if you resist they can send a different AI robot to hunt you down.

When will computers be smarter than humans? Return asked top AI experts: Anton Troynikov



The 2020s have seen unprecedented acceleration in the sophistication of artificial intelligence, thanks to the rise of large language model technology. These machines can perform a wide range of tasks once thought to be solvable only by humans: write stories, create art from text descriptions, and solve complex tasks and problems they were not trained to handle.

We posed these questions to six AI experts, including James Poulos, roon, Max Anton Brewer, Robin Hanson, and Niklas Blanchard. — Eds.

1. What year do you predict, with 50% confidence, that a machine will have artificial general intelligence — that is, when will it match or exceed most humans in every learning, reasoning, or intellectual domain?
2. What changes to society will this effect within five years of occurring?

Anton Troynikov

AGI will be here by 2032. Then will come pandemonium — but be optimistic.

2032. My timeline is short, though perhaps not as short as some others’, because I am increasingly of the opinion that the human intellect is not especially complex relative to other physical systems.

In robotics, there is an observation referred to as Moravec’s paradox. At the dawn of AI research in the 1950s, it was thought that cognitive tasks which are generally difficult for humans — playing chess, proving mathematical theorems, and the like — would also be difficult for machines. Sensorimotor tasks that are easy for humans, like perceiving the world in three dimensions and navigating through it, were thought to also be easy for machines. Famously, the general problem of computer vision (a field in which I’ve spent a large fraction of my career so far) was supposed to be solved in the summer of 1966.

These assumptions turned out to be fatally flawed, and the failure to create machines that could successfully interact with the physical world was one of the causes of the first AI winter when research and funding for AI projects cooled off.

Hans Moravec, for whom the paradox is named, suggested that the reason for this is the relatively recent development, in evolutionary terms, of the human prefrontal cortex, which handles abstract reasoning. In contrast, the structures responsible for sensorimotor functions, which we share with most other higher vertebrates, have existed for billions of years and are, therefore, very highly developed.

This also explains why we hadn’t (and to a large extent, still have not) managed to replicate evolved sensorimotor performance by reasoning about it; human intellect is too immature to reason about the function of the sensorimotor system itself.

Machine learning, however, represents a way to apprehend the world without relying on human intellect. Like evolution, machine learning is a purely empirical process, a general-purpose class of machines for ingesting data, finding patterns, and making predictions based on these patterns. It does not make deductions, nor does it rely on abstractions. In fact, the field of AI interpretability exists because the way in which AI actually functions is alien to the human intellect.
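
To make "purely empirical" concrete, here is a minimal sketch using scikit-learn; the library, dataset, and model are my own illustrative choices, not Troynikov's. No rule about what any digit looks like is ever written down; the model ingests labeled examples and predicts from whatever patterns it finds.

```python
# Purely empirical learning: ingest data, find patterns, predict.
# No deductions and no hand-written rules anywhere in the program.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 pixel images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)  # the pattern-finding happens here, opaquely

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
# Nobody told the model what a "7" looks like; it induced that from data.
```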

Given sufficient data, and enough computational power, AI is capable of determining ever more complex patterns and making ever more complex predictions. The ways in which it will do so will necessarily be increasingly alien as it outstrips our own capacity to find and understand these patterns. A concrete demonstration of this principle is the success with which AI has been able to model language. Linguists have been unable to provide any successful framework for automatic translation for the entire history of the discipline. AI cracked the problem as soon as enough data and computing were available, using extremely general methods.
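
The translation point can be sketched in a few lines, assuming the Hugging Face transformers library; the model choice is illustrative. The same generic sequence-to-sequence machinery handles translation with no linguist-written rules anywhere in sight.

```python
# General-purpose methods applied to translation; assumes
# pip install transformers sentencepiece torch.
from transformers import pipeline

# t5-small is a generic text-to-text model, not a purpose-built translator.
translator = pipeline("translation_en_to_fr", model="t5-small")

result = translator("Language is an expression of reason.")
print(result[0]["translation_text"])  # a French rendering of the sentence
```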

Language is an expression of reason. An emulation of reason itself — through the prediction of what a human would reason with a mechanism alien to that reason — cannot be far behind. We’ll get there not because AI became particularly powerful but because the human intellect is, in the grand scheme of things, rather weak.

Within five years of human-level AI being created, there will be initial pandemonium, followed by normalization. I am generally optimistic about humanity’s future, but foundational technological progress has always come with upheaval. Yes, we got the printing press, but we got the Thirty Years’ War along with it.

I don’t presume to know what shape the upheavals will take, but they are likely to be foundational as societies must reorient around the capability to produce machine intelligences as good as the average human at will. But we’ll figure it out.

Anton Troynikov has spent the last seven years working in AI and robotics as a researcher and engineer. His company, Chroma, makes AI better by increasing its interpretability.

'It’s like interacting with a human': Portland installs robot security as part of surveillance strategy to lower crime



Portland, Oregon, is the latest city to utilize a mobile robot in place of patrolling security guards to lower crime rates in its downtown core.

City officials partnered with community improvement activists and corporate property owners to install a 120-foot mural in downtown Portland as part of a revitalization program.

As KPTV Fox 12 Oregon reported, the project included a painting on the walls of the U.S. Bancorp Tower parking garage. The city hopes the art project will help spruce up a downtown area that has typically been rife with crime.

To help guard the same area, the city has installed more than 200 surveillance cameras along with a new robot addition called "Rob."

The patrolling robot navigates a local parking garage 24 hours a day and is equipped with thermal imaging and a two-way intercom for citizens to speak to the operator.

"It’s like interacting with a human, because there is a human on the other side," said Keren Eichen of Unico Properties, a private equity and real estate company. "If you stop and speak to the robot, you know that there’s someone on the other side who’s answering your questions who can give you directions, can tell you happy holidays."

"This is easier than staffing security because he doesn’t get tired, he doesn’t get cold," Eichen added.

Mark Wells, from a group called Portland Clean and Safe, touted the effects of the new surveillance and said that there has been a sharp decrease in crime.

"We looked at reported crime for assault, vandalism, theft and in November, we a saw a 50% decrease last month than we did the prior five months," Wells claimed. "Crime doesn’t thrive in areas that are vibrant and full of people walking, eating, enjoying the amenities downtown."

However, according to official Portland crime statistics, from January to October 2023, downtown Portland saw 4,227 property crimes, 688 assaults, 40 sex offenses, and 232 "society" crimes, which included drug offenses.

During the same time period in 2022, the downtown area had 3,798 property crimes, 664 assaults, 31 sex offenses, and only 93 societal crimes.

Offenses in the area totaled 4,601 for January-October 2022, nearly 600 fewer than the 5,198 offenses during the same time period of 2023.

Portland is not the first major U.S. city to implement robotic security forces. Cleveland deployed a similar robot named Sam in August 2023, with the city stating that the robot "loves hugs and selfies."

In April 2023, New York Mayor Eric Adams told citizens they "cannot be afraid of [the technology]" when the New York Police Department deployed its own mobile security machine in New York City subways.
