‘The Terminator’ creator warns: AI reality is scarier than sci-fi



In 1984, director James Cameron introduced a chilling vision of artificial intelligence in “The Terminator.” The film’s self-aware AI, Skynet, launched nuclear war against humanity, depicting a future where machines outpaced human control. At the time, the idea of AI wiping out civilization seemed like pure science fiction.

Now, Cameron warns that reality may be even more alarming than his fictional nightmare. And this time, it’s not just speculation — he insists, “It’s happening.”

As AI technology advances at an unprecedented pace, Cameron has remained deeply involved in the conversation. In September 2024, he joined the board of Stability AI, a UK-based artificial intelligence company. From that platform, he has issued a stark warning — not about rogue AI launching missiles, but about something more insidious.

Cameron fears the emergence of an all-encompassing intelligence system embedded within society, one that enables constant surveillance, manipulates public opinion, influences behavior, and operates largely without oversight.

Scarier than the T-1000

Speaking at the Special Competitive Studies Project's AI+Robotics Summit, Cameron argued that today’s AI reality is “a scarier scenario than what I presented in ‘The Terminator’ 40 years ago, if for no other reason than it’s no longer science fiction. It’s happening.”

Cameron isn’t alone in his concerns, but his perspective carries weight. Unlike the military-controlled Skynet from his films, he explains that today’s artificial general intelligence won’t come from a government lab. Instead, it will emerge from corporate AI research — an even more unsettling reality.

“You’ll be living in a world you didn’t agree to, didn’t vote for, and are forced to share with a superintelligent entity that follows the goals of a corporation,” Cameron warned. “This entity will have access to your communications, beliefs, everything you’ve ever said, and the whereabouts of every person in the country through personal data.”

Modern AI doesn’t function in isolation — it thrives on data. Every search, purchase, and click feeds algorithms that refine AI’s ability to predict and influence human behavior. This model, often called “surveillance capitalism,” relies on collecting vast amounts of personal data to optimize user engagement. The more an AI system knows — preferences, habits, political views, even emotions — the better it can tailor content, ads, and services to keep users engaged.
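
To see how tight that feedback loop really is, consider the minimal sketch below. Everything in it (the UserProfile class, the weights, the ranking function) is hypothetical and radically simplified; production recommender systems run at vastly larger scale, but the cycle is the same: log engagement, update the profile, rank content, repeat.

```python
# A minimal sketch of the engagement feedback loop described above.
# All names here (UserProfile, rank_items, the weights) are hypothetical
# and radically simplified; production recommender systems are vastly
# larger, but the cycle is the same: observe, update, rank, observe again.

from collections import Counter

class UserProfile:
    def __init__(self) -> None:
        self.interests: Counter = Counter()  # topic -> engagement weight

    def record_event(self, topic: str, weight: float = 1.0) -> None:
        """Every click, search, or purchase nudges the profile."""
        self.interests[topic] += weight

def rank_items(profile: UserProfile, items: list[tuple[str, str]]) -> list[str]:
    """Order candidate (item_id, topic) pairs by predicted engagement."""
    return [item_id for item_id, topic in
            sorted(items, key=lambda it: profile.interests[it[1]], reverse=True)]

profile = UserProfile()
profile.record_event("politics")        # a click
profile.record_event("politics", 2.0)   # a longer watch
profile.record_event("sports")

# Whatever the user engaged with most floats to the top, which generates
# more engagement data for the next pass: the loop closes on itself.
print(rank_items(profile, [("a1", "sports"), ("a2", "politics"), ("a3", "cooking")]))
# ['a2', 'a1', 'a3']
```

Nothing in that loop asks what is true or healthy for the viewer; the only objective it optimizes is engagement.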

Cameron warns that combining surveillance capitalism with unchecked AI development is a dangerous mix. “Surveillance capitalism can toggle pretty quickly into digital totalitarianism,” he said.

What happens when a handful of private corporations control the world’s most powerful AI with no obligation to serve the public interest? At best, these tech giants become the self-appointed arbiters of human good: the fox guarding the henhouse.

New, powerful, and hooked into everything

Cameron’s assessment is not an exaggeration — it’s an observation of where AI is headed. The latest advancements in AI are moving at a pace that even industry leaders find distressing. The technological leap from GPT-3 to GPT-4 was massive. Now, frontier models like DeepSeek, trained with ideological constraints, show that AI can be manipulated to serve political or corporate interests.

Beyond large language models, AI is rapidly integrating into critical sectors, including policing, finance, medicine, military strategy, and policymaking. It’s no longer a futuristic concept — it’s already reshaping the systems that govern daily life. Banks now use AI to determine creditworthiness, law enforcement relies on predictive algorithms to assess crime risk, and hospitals deploy machine learning to guide treatment decisions.

These technologies are becoming deeply embedded in society, often with little transparency or oversight. Who writes the algorithms? What biases are built into them? And who holds these systems accountable when they fail?

AI experts like Geoffrey Hinton, one of its pioneers, along with Elon Musk and OpenAI co-founder Ilya Sutskever, have warned that AI’s rapid development could spiral beyond human control. But unlike Cameron’s Terminator dystopia, the real threat isn’t humanoid robots with guns — it’s an AI infrastructure that quietly shapes reality, from financial markets to personal freedoms.

No fate but what we make

During his speech, Cameron argued that AI development must follow strict ethical guidelines and “hard and fast rules.”

“How do you control such a consciousness? We embed goals and guardrails aligned with the betterment of humanity,” Cameron suggested. But he also acknowledges a key issue: “Aligned with morality and ethics? But whose morality? Christian, Islamic, Buddhist, Democrat, Republican?” He added that Asimov’s laws could serve as a starting point to ensure AI respects human life.

But Cameron’s argument, while well-intentioned, falls short. AI guardrails must protect individual liberty and cannot be based on subjective morality or the whims of a ruling class. Instead, they should be grounded in objective, constitutional principles — prioritizing individual freedom, free expression, and the right to privacy over corporate or political interests.

If we let tech elites dictate AI’s ethical guidelines, we risk surrendering our freedoms to unaccountable entities. Instead, industry standards must embed constitutional protections into AI design — safeguards that prevent corporations or governments from weaponizing these systems against the people they are meant to serve.

Cameron is right to sound the alarm. AI is no longer a theoretical risk — it is here, evolving rapidly, and integrating into every facet of society. The question is no longer whether AI will reshape the world but who will shape AI.

As Cameron’s films have always reminded us: The future is not set. There is no fate but what we make. If we want AI to serve humanity rather than control it, we must act now — before we wake up in a world where freedom has been quietly coded out of existence.

DARPA is out of control



Few organizations embody the darker side of technological advancement like DARPA, the U.S. Department of Defense’s research arm. From stealth aircraft to the foundation of the internet, its innovations have reshaped warfare and infiltrated daily life. As anyone familiar with government agencies might expect, DARPA routinely crosses ethical lines, fueling serious concerns about privacy and control. Its relentless pursuit of cutting-edge technology has turned it into a force for domestic surveillance and behavioral manipulation. The agency operates with near impunity, seamlessly shifting its battlefield innovations into the lives of ordinary Americans.

Precrime predictions and de-banking dystopia

One of DARPA's most unsettling ventures is its development of an algorithmic Theory of Mind, a technology designed to predict and manipulate human behavior by mimicking an adversary's situational awareness. Simply put, this isn’t just spying; it’s a road map for controlling behavior. While it's framed as a military tool, the implications for civilian life are alarming. By harvesting massive amounts of behavioral data, DARPA aims to build algorithms that can predict decisions, emotions, and actions with unnerving precision. Imagine a world where such insights are weaponized to sway public opinion, deepen divides, or silence dissent before it even begins. Some might say we’re already there. Perhaps we are — but it can always get worse. Presented as a matter of national security, this kind of psychological manipulation poses a direct threat to free will and informed consent.

We live in a time when major agencies have shifted their focus inward. Domestic terrorism has become their new obsession. And in this climate, all Americans are fair game. The same surveillance and control mechanisms once reserved for foreign threats are now being quietly repurposed for monitoring, influencing, and manipulating the very people they claim to protect.

Equally alarming is DARPA’s Anticipatory and Adaptive Anti-Money Laundering (A3ML) program. Using artificial intelligence to predict illicit financial activities before they occur may sound like a noble pursuit, but this precrime framework carries Orwellian implications. A3ML casts an expansive surveillance net over ordinary citizens, scrutinizing their financial transactions for signs of wrongdoing. And as we all know, algorithms are far from infallible. They’re prone to bias, misinterpretation, and outright error, leaving individuals vulnerable to misidentification and false accusations. Consider the unsettling idea of being labeled a financial criminal because an algorithm misreads your spending habits. Soon, this won’t just be a hypothetical — it will be a reality.

Things are already bad enough.

Marc Andreessen, in a recent interview with Joe Rogan, highlighted the growing scourge of de-banking in America, where individuals sympathetic to Trump are unfairly targeted. This troubling trend underscores a larger issue: Algorithms, while often portrayed as impartial, are far from it. They’re engineered by humans, and in Silicon Valley, most of those humans lean left. Politically, the tide may be turning, but Silicon Valley remains dangerously blue, shaping systems that reflect its own ideological biases.

Without transparency and accountability, these systems risk evolving into even more potent tools of financial oppression, punishing innocent people and chipping away at the last shreds of trust in public institutions. Even worse, we could end up in a society where every purchase, every transaction, is treated like a potential red flag. In other words, a system eerily similar to China’s is looming — and it’s closer than most of us want to admit.

History’s lessons

These two programs align disturbingly well with DARPA’s history of domestic surveillance, most famously represented by the Total Information Awareness program. Launched after 9/11, TIA aimed to aggregate and analyze personal data on a massive scale, using everything from phone records to social media activity to predict potential terrorist threats. The program’s invasive methods sparked public outrage, leading to its official termination — though many believe its core technologies were quietly repurposed. This raises a critical question: How often do DARPA’s military-grade tools slip into civilian use, bypassing constitutional safeguards?

Too often, I suggest.

Who’s watching the watchers?

The implications of DARPA’s programs cannot be overstated. Operating under a dangerous degree of secrecy, the agency remains largely shielded from public scrutiny. This lack of transparency, combined with its sweeping technological ambitions, makes it nearly impossible to gauge the true extent of its activities or whether any safeguards exist to prevent abuse.

We must ask how DARPA’s tools could be turned against the citizens they claim to protect. What mechanisms ensure that these technologies aren’t abused? Who holds DARPA accountable? Without strong oversight and clear ethical guidelines, the line between protecting the public and controlling it continues to blur.

Let’s hope someone in Donald Trump’s inner circle is paying attention — because the stakes couldn’t be higher.

DARPA is out of control.

Eyes everywhere: The AI surveillance state looms



Rapid advancements in artificial intelligence have produced extraordinary innovation, but they also raise significant concerns. Powerful AI systems may already be shaping our culture, identities, and reality. As technology continues to advance, we risk losing control over how these systems influence us. We must urgently consider AI’s growing role in manipulating society and recognize that we may already be vulnerable.

At a recent event at Princeton University, former Google CEO Eric Schmidt warned that society is unprepared for the profound changes AI will bring. Discussing his recent book, “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” Schmidt said AI could reshape how individuals form their identities, threatening culture, autonomy, and democracy. He emphasized that “most people are not ready” for AI’s widespread impact and noted that governments and societal systems lack preparation for these challenges.

Schmidt wasn’t just talking about potential military applications; he was talking about individuals’ incorporation of AI into their daily lives. He suggested that future generations could be influenced by AI systems acting as their closest companions.

“What if your best friend isn’t human?” Schmidt asked, highlighting how AI-driven entities could replace human relationships, especially for children. He warned that this interaction wouldn’t be passive but could actively shape a child’s worldview — potentially with a cultural or political bias. If these AI entities become embedded in daily life as educational tools, digital companions, or social media curators, they could wield unprecedented power to shape individual identity.

This idea echoes remarks made by OpenAI CEO Sam Altman in 2023, when he speculated about the potential for AI systems to control or manipulate content on platforms like Twitter (now X).

“How would we know if, like, on Twitter we were mostly having LLMs direct the … whatever’s flowing through that hive mind?” Altman asked, suggesting it might be impossible for users to detect whether the content they see — whether trending topics or newsfeed items — was curated by an AI system with an agenda.

He called this a “real danger,” underscoring AI’s capacity to subtly — and without detection — manipulate public discourse, choosing which stories and events gain attention and which remain buried.

Reshaping thought, amplifying outrage

The influence of AI is not limited to identity alone; it can also extend to the shaping of political and cultural landscapes.

In the 2019 edition of its Global Risks Report, the World Economic Forum emphasized how mass data collection, advanced algorithms, and AI pose serious risks to individual autonomy. A section of the report warns that AI and algorithms can be used to monitor and shape our behaviors, often without our knowledge or consent.

The report highlights that AI has the potential to create “new forms of conformity and micro-targeted persuasion,” pushing individuals toward specific political or cultural ideologies. As AI becomes more integrated into our daily lives, it could make individuals more susceptible to radicalization. Algorithms can identify emotionally vulnerable people, feeding them content tailored to manipulate their emotions and sway their opinions, potentially fueling division and extremism.

We have already seen the devastating impact of similar tactics in the realm of social media. In many cases, these platforms use AI to curate content that amplifies outrage, stoking polarization and undermining democratic processes. The potential for AI to further this trend — whether in influencing elections, radicalizing individuals, or suppressing dissent — represents a grave threat to the social fabric of modern democratic societies.

In more authoritarian settings, governments could use AI to tighten control by monitoring citizens’ every move. By tracking, analyzing, and predicting human actions, AI fosters an environment ripe for totalitarian regimes to grow.

In countries already compromising privacy, AI’s proliferation could usher in an omnipotent surveillance state where freedoms become severely restricted.

Navigating the AI frontier

As AI continues to advance at an unprecedented pace, we must remain vigilant. Society needs to address the growing potential for AI to influence culture, identity, and politics, ensuring that these technologies are not used for manipulation or control. Governments, tech companies, and civil society must work together to create strong ethical frameworks for AI development and deployment that are devoid of political agendas and instead embrace individual liberty and autonomy.

The challenges are complex, but the stakes are high. Schmidt, Altman, and others in the tech industry have raised alarms, and it is crucial that we heed their warnings before AI crosses an irreversible line. We need to establish global norms that safeguard privacy and autonomy, promoting transparency in how AI systems are used and ensuring that individuals retain agency over their own lives and beliefs.

Big Brother’s bigger brother: The Five Eyes’ war on your freedom



“Think of the children.”

Few phrases have been more effective at dismantling rights and silencing opposition. It’s the ultimate rhetorical Trojan horse, bypassing rational debate to smuggle in crippling, inhumane policies.

The Five Eyes alliance — an Orwellian pact of surveillance states spanning the U.S., U.K., Canada, Australia, and New Zealand — has perfected this tactic. Its latest campaign claims to protect children from harm. Don’t be fooled. The real goal is to invade every corner of your digital life. Marketed as a crackdown on platforms like TikTok and Discord, accused of radicalizing youth, these efforts pave the way for a surveillance system more destructive than anything seen before. Big Brother has a Bigger Brother.

Erasing encryption

Now, to be clear, TikTok is a serious problem. The app is a digital honey trap for the Chinese Communist Party, vacuuming up data and warping young minds with addictive content. But Beijing doesn’t have a monopoly on exploitation. The United States, alongside its Five Eyes allies, is quietly turning “protecting children” into a blunt instrument to crush dissent and invade every corner of your life. “Violent extremist content is more accessible, more digestible, and more impactful than ever before,” claims the Five Eyes initiative. This assertion may justify increasingly invasive measures under the pretext of preventing exposure to such content.

Which takes us to the heart of this initiative: a relentless assault on encryption — the very backbone of digital privacy. By undermining encryption, the alliance aims to tear down the barriers safeguarding your most sensitive information, from private conversations to financial records.

The push to weaken encryption has nothing to do with safety; it’s about control. Demolishing encryption protections doesn’t just expose Americans to government overreach; it also leaves them wide open to cybercriminals, identity thieves, and hostile foreign actors. And in a darkly ironic twist, it makes children — the very people these elites claim to be protecting — far more vulnerable to the same predators they claim to fight. Back doors in encryption don’t discriminate. They become open doors, waiting to be exploited by anyone who can breach them.
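
To make that point concrete, here is a toy sketch of key escrow, the classic form of an encryption back door. It uses the Python "cryptography" package's Fernet cipher as a stand-in for real protocols, and every key name is invented; the structural point is simply that an escrowed copy of the session key decrypts for whoever holds it.

```python
# A toy sketch of key escrow: the message key is wrapped once for the
# user and once for the escrow holder. The point is structural: whoever
# holds the escrow key can decrypt, whether that's a court with a
# warrant or a criminal who stole the key. Requires the third-party
# 'cryptography' package (pip install cryptography).

from cryptography.fernet import Fernet

user_key = Fernet(Fernet.generate_key())      # key the user controls
escrow_key = Fernet(Fernet.generate_key())    # the mandated back door

# Encrypt a message with a fresh session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"private conversation")

# ...then wrap that session key for BOTH the user and the escrow holder.
wrapped_for_user = user_key.encrypt(session_key)
wrapped_for_escrow = escrow_key.encrypt(session_key)

# Anyone holding the escrow key can recover the session key and read
# the message. The lock cannot tell a warrant from a thief.
recovered = escrow_key.decrypt(wrapped_for_escrow)
print(Fernet(recovered).decrypt(ciphertext))  # b'private conversation'
```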

Learning from history

Historically, cries of “save the children” have been a powerful tool to drive moral panics that systematically erode civil liberties. In America, this tactic has repeatedly served as justification for policies that expand state power at the expense of individual freedoms. During the Red Scare of the 1950s, protecting children from communist indoctrination became a rallying point for sweeping censorship and loyalty oaths. Teachers were fired, school curriculums gutted, and free expression stifled — all in the name of shielding youth from so-called subversive ideas.

The Five Eyes’ latest initiative is nothing more than the same authoritarian playbook, updated for the digital age.

“The online environment allows minors to interact with adults and other minors, allowing them to view and distribute violent extremist content which further radicalises themselves and others,” it reads. This highlights the potential for mass monitoring of minors’ online activities, raising concerns about privacy and disproportionate responses. More troublingly, it sets the stage for invasive measures that target young people under the pretense of safety.

The emotional appeal of protecting youth is, yet again, being used to rally support for policies that concentrate power in the hands of the state. The pattern is unmistakable: Invoke fear, demand action, and chip away at freedoms in the process.

Same stuff, different decade.

The new scare

Today, it’s encryption in the crosshairs. Tomorrow, it could be the criminalization of dissent. Consider the language of the Five Eyes campaign, rife with vague terms like “malign actors” and “extremism.” These are not carefully defined threats but malleable excuses, broad enough to ensnare journalists, whistleblowers, or anyone daring to criticize those in power.

“Minors are increasingly normalising violent behaviour in online groups, including joking about carrying out terrorist attacks and creating violent extremist content,” the initiative warns. Monitoring and interpreting minors’ online jokes or behaviors could lead to punitive actions against young people for relatively harmless activities. Sharing a meme, for instance, could be misconstrued as evidence of radicalization, turning a harmless joke into a justification for invasive surveillance or even legal consequences.

The initiative insists that a “renewed whole-of-society approach is required to address the issue of minors radicalising to violent extremism.” Such broad language could and should be interpreted as a mandate for expansive powers that infringe on individual rights and freedoms. This approach might involve mass data collection or enlisting private entities as de facto surveillance agents.

The danger isn’t hypothetical. The United States already leads the world in invasive surveillance. Think of the NSA’s PRISM program, exposed by Edward Snowden, which harvested Americans’ emails, messages, and browsing history under the flimsiest of legal pretexts. Weakening encryption will only supercharge this predation, turning every device into a surveillance tool. Yes, things are already dire — privacy is virtually nonexistent. But it can always get worse. The erosion of rights doesn’t happen all at once; it’s a slow, relentless grind, and complacency is its greatest ally.

America must push back against this descent. TikTok is not the only enemy. If the Five Eyes initiative succeeds, future generations will curse us for our cowardice.


How your smart TVs are spying on you and your loved ones



Once, not that long ago, televisions were beloved devices that brought families together for regular rituals of laughter, drama, and storytelling. But today, as we settle in for a night of streaming on our sleek smart TVs, that warmth feels increasingly distant. These modern monstrosities offer endless options and voice-activated convenience, but this comes at a steep price. While we put our feet up and enjoy our favorite shows, we’re also inviting a level of surveillance into our homes that would have been unthinkable a few decades ago.

According to a new report by the Center for Digital Democracy, smart TVs have become yet another cog in a massive, data-driven machine. Specifically, this machine is an ecosystem that harvests viewer data with military-like precision, prioritizing profits over privacy, individual autonomy, and, arguably, our collective well-being.

A Trojan horse in disguise

As the report details, these devices function as sophisticated surveillance tools, tracking viewers' every move across platforms. From Tubi to Netflix to Disney+, streaming services rely heavily on various data collection mechanisms to fuel a relentless advertising engine. These companies boast about their ability to collect "billions of rows of data" on their viewers, using machine learning algorithms to personalize the entire experience — from what shows are recommended to the ads viewers are served.

Tools like Automatic Content Recognition — built into TVs by companies such as LG, Samsung, and Roku — track and analyze everything you watch. ACR collects data frame by frame, creating detailed viewer profiles that are then used for targeted advertising. These profiles can include information about the devices in your home and the content you purchase, all feeding into a continuous feedback loop for advertisers. The more you watch, the more the system learns about you — and the greater its ability to shape your choices. The “non-skippable” ads, personalized to reflect intimate knowledge about viewers' behaviors and vulnerabilities, are particularly disturbing. They are engineered to be as compelling and intrusive as possible.
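
Here is a deliberately toy sketch of the ACR idea: reduce each frame to a compact fingerprint, match it against a reference database, and append the hit to a viewer profile. Real ACR systems use robust perceptual fingerprints of audio or video rather than exact hashes, and every name below is hypothetical.

```python
# A toy sketch of ACR-style matching. Real systems use perceptual
# fingerprints that survive compression and scaling, not exact hashes;
# every name here (fingerprint, REFERENCE_DB, etc.) is hypothetical.

import hashlib

# Reference database mapping frame fingerprints to known content.
REFERENCE_DB: dict[str, str] = {}

def fingerprint(frame_bytes: bytes) -> str:
    """Reduce a frame to a compact fingerprint (toy: a truncated hash)."""
    return hashlib.sha256(frame_bytes).hexdigest()[:16]

def register_content(title: str, frames: list[bytes]) -> None:
    """Index known content so captured frames can be matched later."""
    for f in frames:
        REFERENCE_DB[fingerprint(f)] = title

def identify(frame_bytes: bytes) -> str | None:
    """Match a captured frame against the reference database."""
    return REFERENCE_DB.get(fingerprint(frame_bytes))

viewer_profile: list[str] = []

register_content("Some Streaming Show S01E01", [b"frame-001", b"frame-002"])

match = identify(b"frame-002")      # the TV samples what's on screen
if match:
    viewer_profile.append(match)    # and quietly logs it to the profile

print(viewer_profile)  # ['Some Streaming Show S01E01']
```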

Smart TVs are living up to their names. They know everything about you. And I mean absolutely everything.

Data-driven manipulation

The streaming industry has rapidly grown into one of the most lucrative advertising sectors, with streaming platforms like Disney+, Netflix, and Amazon Prime attracting billions in ad revenue. As the report warns, these platforms now use advanced generative AI and machine learning to produce thousands of hyper-targeted ads in seconds — ads for Mom, ads for Dad, and ads for the little ones. By employing tools like identity graphs, which compile data from across an individual’s digital footprint, streaming services can track and target viewers on their televisions and throughout their entire digital lives. That's right. Smart TVs seamlessly interact with other smart devices, basically "talking" to each other and sharing valuable gossip.
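
The "identity graph" is, at bottom, a graph-connectivity problem: link any two identifiers ever observed together, then treat everything in a connected component as one person. The sketch below shows the idea with a simple union-find structure; it is an illustration of the technique, not any vendor's actual code.

```python
# A minimal sketch of the "identity graph" idea: link identifiers
# (email, phone, TV serial, cookie) that are observed together, so
# activity on any one device can be attributed to a single person.

class IdentityGraph:
    def __init__(self) -> None:
        self.parent: dict[str, str] = {}

    def _find(self, x: str) -> str:
        """Follow links to the representative of x's component."""
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a: str, b: str) -> None:
        """Record that two identifiers were seen together (e.g., same login)."""
        self.parent[self._find(a)] = self._find(b)

    def same_person(self, a: str, b: str) -> bool:
        return self._find(a) == self._find(b)

g = IdentityGraph()
g.link("email:jane@example.com", "phone-device-123")
g.link("phone-device-123", "smart-tv-serial-789")

# The TV and the email address now resolve to the same individual.
print(g.same_person("email:jane@example.com", "smart-tv-serial-789"))  # True
```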

This data collection goes far beyond tracking viewing habits. The report reveals that companies like Experian and TransUnion have developed identifiers that encompass deeply personal details, such as health information, financial status, and political views. Who will you vote for in November? You already know — and so does your TV.

Crooked capitalism

At its core, capitalism has been a driving force of innovation, progress, and prosperity. Its brilliance lies in its ability to harness human creativity and ambition, rewarding those who bring value to the market. In its purest form, capitalism is entirely meritocratic. Through competition and the pursuit of profit, it has lifted millions out of poverty and helped make America the greatest nation known to man.

However, we see today a gross distortion of capitalism’s core principles. Surveillance capitalism has taken the place of pure capitalism. Instead of fostering innovation, this monstrous model feeds off personal data, often without our knowledge or consent. It preys particularly on vulnerable groups like children, exploiting their behaviors and emotions to turn a profit. The same system that once championed freedom now thrives on violating privacy, reducing human experiences to commodities.

Smart TVs and surveillance capitalism go hand in hand.

This raises an urgent question: What can we do about it? While it’s tempting to grab a sledgehammer and smash your nosy device into a million pieces, more practical solutions exist.

Start by diving into your TV's settings and disabling data tracking features such as ACR. You can also refuse to sign up for accounts or services that require extensive data sharing. For those willing to pay a bit more, opting for ad-free services can limit the data collected on your viewing habits, though it’s not a foolproof solution.

Additionally, advocating for stronger regulations on data privacy and transparency in advertising technologies is crucial. As consumers, we need to push policymakers to implement stricter laws that hold companies accountable for the data they collect and how they use it. Organizations like the Center for Digital Democracy, which authored this important report, are already fighting for these changes. This is a matter of critical importance. Close to 80% of homes in the U.S. have a smart TV.

Big Brother isn't just in your living room — he knows what you’re watching, what you’re thinking, what you’re buying, and even where you’re going. Not for the sledgehammer, I hope.

Your car is SPYING on you — and it’s only going to get worse



If you thought your personal property was private, then you might not have read the agreement you signed when you purchased it — and this is especially true when it comes to your vehicle.

Car expert Lauren Fix is sounding the alarm, explaining that in the infrastructure bill of 2021, “there’s a kill switch law.”

“That kill switch law allowed them to listen in your car, to monitor your eyes, to literally track all of your information,” Fix tells Hilary Kennedy and Matthew Peterson of “Blaze News Tonight.”

“And what are they doing with that information?” she continues. “We know that manufacturers are hurting financially. We see a lot of cars sitting on lots, and as long as prices keep getting higher, their profit margins are shrinking.”


So they sell the data to places like insurance companies and the police department.

“Then, just recently Ford decided to create a patent that would sell all your information directly to the police department so that they wouldn’t have to go through some sort of contract,” Fix explains, adding, “Which, again, is a violation of our privacy and really infuriates me.”

While Americans are right to be infuriated, the problem is that they do disclose this information in the paperwork — which almost no one actually reads.

“It’s going to get worse because in 2026, all cars are going to have a kill switch in them,” Fix says. “That’s going to tell whether you’re under the influence of something by the start/stop button.”

“Is all of this perfectly legal?” Peterson asks, concerned.

“Well, it actually probably isn’t. But we sign those agreements,” Fix says.


Drive carefully — your car is watching



It's coming from inside the car!

I've told you about the AI-enabled cameras that can tell if you're speeding — or on your phone. Now, car manufacturers are joining the assault on your privacy.

Take Ford, for example. The iconic American company recently filed a not-so-American patent for technology that would allow a car to snitch on drivers.

Entitled "Systems and Methods for Detecting Speeding Violations" — not quite as catchy as "Built Ford Tough" — the patent filing details a system that would use vehicles' cameras and sensors to detect speeding motorists and report them to authorities.

The filing includes basic sketches and flowcharts illustrating how this technology senses speed violations, activates cameras to capture images, and transmits data to nearby "pursuit vehicles" or logs it to a server. The captured data, including speed, GPS location, and clear imagery or video, can then be sent to authorities for potential action.
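
Based on the flow the filing describes, the logic might look something like the sketch below. All the function and field names are invented for illustration; the patent itself provides flowcharts and sketches, not code.

```python
# A rough sketch of the flow described in the filing: sense a speed
# violation, capture imagery, and hand off a report. Names and fields
# are hypothetical; this is an illustration, not Ford's implementation.

from dataclasses import dataclass

@dataclass
class ViolationReport:
    speed_mph: float
    limit_mph: float
    gps: tuple[float, float]
    image_ref: str  # pointer to the captured camera footage

def capture_image() -> str:
    """Placeholder for activating onboard cameras and storing a frame."""
    return "camera/frame-0001.jpg"

def check_vehicle(speed_mph: float, limit_mph: float,
                  gps: tuple[float, float]) -> ViolationReport | None:
    """Compare observed speed to the posted limit; report if exceeded."""
    if speed_mph <= limit_mph:
        return None
    return ViolationReport(speed_mph, limit_mph, gps, capture_image())

def dispatch(report: ViolationReport) -> None:
    # Per the filing, data goes to nearby pursuit vehicles or a server.
    print(f"Reporting {report.speed_mph} mph in a {report.limit_mph} zone "
          f"at {report.gps}, evidence: {report.image_ref}")

report = check_vehicle(82.0, 65.0, (30.2672, -97.7431))
if report:
    dispatch(report)
```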

According to Ford, it is developing this technology for police cars. In other words, don't worry: This invasive surveillance tech will be exclusively in the hands of the state.

And I'm sure the company would never think of adapting it so your own car can inform any nearby police that they should pull you over.

Then there's GM.

Did you know the company's so concerned about empowering you to keep your data secure that it just consolidated five different lengthy privacy statements into one disclosure document?

Talk about putting the customer first! Yeah, a massive lawsuit and widespread public backlash have a way of encouraging that.

Last month, Texas Attorney General Ken Paxton filed suit on behalf of the state against GM, accusing the automaker of installing technology on more than 14 million vehicles to collect data about drivers, which it then sold to insurers and other companies without drivers’ consent.

The suit contends that the data was used to compile “Driving Scores” assessing whether more than 1.8 million Texas drivers had “bad” habits such as speeding, braking too fast, steering too sharply into turns, not using seatbelts, and driving late at night. Insurers could then use the data when deciding whether to raise premiums, cancel policies, or deny coverage.
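
To see how trivially such a score can be compiled, here is a hypothetical sketch using the telemetry categories named in the suit. The weights and the formula are invented; the filings describe what data was collected, not how GM's scoring actually worked.

```python
# A hypothetical sketch of compiling a "Driving Score" from the
# telemetry categories named in the suit. Penalty weights and the
# start-at-100 formula are invented for illustration only.

TRIP_EVENTS = [
    {"type": "hard_brake"},
    {"type": "speeding"},
    {"type": "sharp_turn"},
    {"type": "no_seatbelt"},
    {"type": "late_night_drive"},
    {"type": "hard_brake"},
]

PENALTIES = {
    "hard_brake": 4,
    "speeding": 6,
    "sharp_turn": 3,
    "no_seatbelt": 5,
    "late_night_drive": 2,
}

def driving_score(events: list[dict]) -> int:
    """Start from 100 and subtract a penalty per flagged event."""
    score = 100
    for event in events:
        score -= PENALTIES.get(event["type"], 0)
    return max(score, 0)

print(driving_score(TRIP_EVENTS))  # 76 -- a number an insurer could use to reprice a policy
```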

The technology was allegedly installed on most GM vehicles starting with the 2015 model year. Paxton said GM’s practice was for dealers to lead unwitting consumers, fresh from the stressful buying or leasing process, to believe that enrolling in its OnStar diagnostic products, which collected the data, was mandatory.

“Companies are using invasive technology to violate the rights of our citizens in unthinkable ways,” Paxton said in a statement. “Our investigation revealed that General Motors has engaged in egregious business practices that violated Texans’ privacy and broke the law. We will hold them accountable.”

This isn’t the first time Texas has stood up for its drivers. In 2019, Governor Greg Abbott signed a bill to ban red-light cameras, two years after KXAN-NBC in Austin, Texas, reported that almost all cities with red-light cameras had illegally issued traffic tickets.

The station’s investigation also found that drivers had paid the city of Austin over $7 million in fines since the cameras were installed and that Texas cities had collected over $500 million from the cameras since 2007.

Tech titan Larry Ellison teases AI-powered surveillance state that will keep you on your 'best behavior'



Oracle chairman and chief technology officer Larry Ellison, the world's second-richest man, recently revealed how his company could furnish authorities with the technological means to better surveil the populace and socially engineer those involuntarily living their lives on camera.

"Citizens will be on their best behavior because we're constantly recording and reporting everything that is going on," Ellison said last week at the database and cloud computing company's financial analyst meeting. "It's AI that's looking at the cameras."

After discussing broadening and implementing surveillance systems in the health and education sectors, Ellison raised the matter of law enforcement applications and police body cameras.

"We completely redesigned body cameras," said the billionaire. "The camera's always on. You don't turn it on and off."

Whether an officer is having lunch with friends or in the lavatory, Oracle will never shut its eyes.

Ellison noted, for example, that if a police officer wants a moment of relative privacy so that he can go to the washroom, he must notify Oracle.

"We'll turn it off. Truth is, we don't really turn it off. What we do is we record it so no one can see it," said Ellison. "No one can get into that recording without a court order. You get the privacy you requested ... but if you get a court order, we will judge — I want to look at that, this so-called bathroom break."

"We transmit the video back to headquarters," continued the Oracle CTO, "and AI is constantly monitoring the video."

If AI spots behavior it has been trained to regard as suspicious, then it will flag it and issue an alert to the relevant authorities.
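
The access model Ellison describes can be captured in a few lines: recording never stops, a "privacy" request merely flags the footage, and a court order unflags it. The sketch below is a hypothetical illustration of that policy, not Oracle's implementation; every class and method name is invented.

```python
# A sketch of the access model Ellison describes: footage never stops
# being recorded; "off" only restricts who can view it, and a court
# order unlocks it. All names here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Clip:
    clip_id: int
    private: bool = False   # flagged private at the officer's request

@dataclass
class BodyCamArchive:
    clips: list[Clip] = field(default_factory=list)

    def record(self, private: bool = False) -> Clip:
        """The camera is always on; 'off' just flags the clip private."""
        clip = Clip(clip_id=len(self.clips), private=private)
        self.clips.append(clip)
        return clip

    def view(self, clip_id: int, court_order: bool = False) -> str:
        """Stream a clip to a reviewer or the monitoring AI, if allowed."""
        clip = self.clips[clip_id]
        if clip.private and not court_order:
            return "ACCESS DENIED: private clip, court order required"
        return f"streaming clip {clip_id} to reviewer / monitoring AI"

archive = BodyCamArchive()
archive.record()                            # routine patrol footage
bathroom = archive.record(private=True)     # the requested "off" period

print(archive.view(bathroom.clip_id))                    # denied
print(archive.view(bathroom.clip_id, court_order=True))  # unlocked
```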

By constructing what is effectively a high-tech panopticon, Ellison indicated that police officers and citizens alike would be more inclined to behave as convention and law dictated they should "because we're constantly recording — watching and recording — everything that's going on."

Ellison indicated that this system of digital eyes on cars, drones, and humans amounts to "supervision."

The tech magnate framed these applications as benign — as ways to curb police brutality. However, Oracle’s own recent conduct gives cause to suspect the potential for abuse.

In July, Oracle agreed to pay $115 million to settle a lawsuit in which the company was accused of running roughshod over people's privacy by collecting their data and selling it to third parties, reported Reuters.

According to the plaintiffs, Oracle created unauthorized "digital dossiers" for hundreds of millions of people, which were then allegedly sold to marketers and other organizations.

Critics responding online to Ellison's remarks also expressed concerns over how such applications will all but guarantee a communist Chinese-style surveillance state in the West — something that's already under way in the U.K., one of the most surveilled countries on the planet.

The U.K.'s former Home Office biometrics and surveillance commissioner Fraser Sampson told the Guardian before ending his term last year that AI was supercharging Britain's public-private "omni-surveillance" society.

"There was a lawyer back in 2010 who used the expression 'omni-surveillance,' and I think, yes, we are in that. There isn't much not being watched by somebody. The thing is, almost all of it's been watched by people on private devices. And they now share it, whether they want them to or not, with everybody, the police, the state, the foreign government, anybody," said Sampson.

"When all that needed a human to edit it, it wasn't an issue because no one was going to live long enough to get through 10 minutes. But now you can do it with AI editing. All of a sudden you can tap that ocean," added the watchdog.

The U.K. has ostensibly taken a turn for the worse under the current Labour government, which is working to greatly expand the use of live facial recognition technology.

While some have taken to keyboards to bemoan the growth of the Western surveillance state, so-called Blade Runner activists have, in recent years, taken to chopping down public and private cameras, including ultra-low emission zone enforcement cameras.

Digital ID is coming: Will Americans lose freedom in the name of security?



America was founded on liberty and rights, but Big Tech and Big Government keep trying to take them away.

The latest example comes from the National Institute of Standards and Technology, whose National Cybersecurity Center of Excellence is currently working to develop wide-reaching digital IDs. More specifically, NIST is collaborating with tech companies and banks to link mobile driver’s licenses with people’s finances. The broader purpose is to work toward developing a digital ID for everyone that centralizes all their personal information, supposedly to boost cybersecurity and provide more convenience for financial transactions.

Working with a range of partners, including the California DMV, the Department of Homeland Security, Microsoft, iLabs, MATTR, the OpenID Foundation, and large financial institutions such as Wells Fargo and JPMorgan Chase, NIST has now contracted several digital identity specialist companies to implement the project.

According to NIST digital identity program lead Ryan Galluzzo, NIST’s advances are about allowing people to present ID in the most convenient and secure way possible while still allowing them to rely on traditional physical ID.

“We want to open up the use of modern digital pathways while still allowing for physical and manual methods whenever they may be necessary,” Galluzzo said.

By linking banking information with mobile driver’s licenses, NIST moves one step closer to implementing a central digital ID that contains people’s private information. NIST promises that this new digital ID acceleration “will address ‘Know Your Customer/Customer Identification Program Onboarding and Access’ which will demonstrate the use of an mDL and/or Verifiable Credentials (VC) for establishing and accessing an online financial account.”
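
In rough outline, that "Know Your Customer" demonstration amounts to verifying a digitally signed credential instead of inspecting a plastic card. The sketch below illustrates the shape of the flow with a bare HMAC standing in for the real cryptography; actual mDL and VC verification involves issuer public-key infrastructure and selective disclosure, and every name here is hypothetical.

```python
# A simplified sketch of the KYC flow NIST describes: a bank verifies a
# digitally signed credential (an mDL or VC) instead of a plastic card.
# An HMAC stands in for the real cryptography; actual mDL/VC systems
# use issuer PKI and selective disclosure, and the verifier never holds
# the issuer's secret key as it does in this toy.

import hashlib
import hmac
import json

ISSUER_KEY = b"state-dmv-signing-key"  # hypothetical issuer secret

def issue_credential(claims: dict) -> dict:
    """The DMV signs the holder's claims (the mobile driver's license)."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_for_onboarding(credential: dict) -> bool:
    """The bank checks the signature before opening the account."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

mdl = issue_credential({"name": "Jane Doe", "dob": "1990-01-01",
                        "license_no": "D1234567"})

print(verify_for_onboarding(mdl))  # True -> account opened, ID linked to finances
```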

The project will move forward in three main steps. According to NIST, it will aim to standardize and promote “digital ID standards” while still respecting and maximizing “privacy and usability.” This digital ID project is currently in the build phase.

With technology that can now identify people beyond a shadow of a doubt by how they walk, how they breathe, and the patterns of their irises, and with phones and GPS systems geolocating individuals at almost every moment of the day, digital ID is ripe for abuse by an authoritarian government or malicious actors. The easier it becomes for a citizen’s important data to be accessed by law enforcement, government, or bad actors, the closer we get to a digital panopticon in which citizens are constantly tracked and subject to potential suspicion while having no recourse to alternative methods of payment or identity.

This move to link mobile driver’s licenses with banking is bigger news than it appears on the surface. While it can easily be justified as necessary, innovative, and forward-thinking, the more digital ID is developed in America, the rarer and more cumbersome the alternatives will become, until they are eventually outlawed or severely restricted. What starts as an incentive or benefit all too often becomes a mandate and a requirement down the road. NIST’s moves to build up a more powerful and connected digital ID will inevitably lead to Americans becoming less free, regardless of how these policies are framed or how much of a positive spin they are given.