Netflix’s chilling new surveillance tools are watching YOU



There was a time, for a brief second, when Netflix felt like a genuine escape. No ads. No distractions. Just a moment of sacred silence before the next episode auto-played. YouTube, on the other hand, has always been the neighborhood hawker, jamming five-second countdowns and “skip” buttons between cat videos and clips of Candace Owens speaking with Harvey Weinstein. But Netflix? It felt different. Intentional. Entirely neutral.

Not anymore.

We now know that YouTube, owned by Google (the company that famously deleted “don’t be evil” from its code of conduct), uses AI to analyze your viewing habits in real time. The company calls it Peak Points, a system that detects when you’re most emotionally invested. Not so it can recommend better content. No, it’s so YouTube can slice in an ad. A perfectly timed disruption — just as you’re crying, laughing, leaning in. Not after. During. Essentially, it’s manipulation dressed as optimization.

If Google pulling this stunt doesn’t surprise you, that’s because nothing Google does should surprise you. What should worry you, however, is Netflix quietly following suit, disguised beneath its polished UI and faux prestige. To be clear, this isn’t a case of algorithms nudging you toward rom-coms or action thrillers. This is full-blown behavioral harvesting, run out of what’s called “clean rooms,” a fancy way of saying they’re still collecting everything, just behind closed doors. They promise it’s private. But they still track your habits, reactions, pauses, and clicks. They’re not watching you, they insist. Just everything you do.

Netflix’s ad-supported tier allows third-party data brokers — including Experian (more on this notorious credit score company in a minute) — to build a psychological profile on you. Your stress tells them what to sell. Your loneliness tells them when to sell it. Your late-night binge-watching isn’t just a pattern; it’s a profile. You think you’re relaxing, when in reality, you're participating in a lab study that you never signed up for. Not knowingly, anyway.

Netflix used to sell impressions. Now, however, it's selling intimacy — your intimacy. It's the kind of advertising that doesn’t feel like advertising because it’s been trained to mimic your tone, your mood, your hesitation. Mid-roll ads now talk back. Pause screens offer prompts and tailored suggestions based not on your genre preferences but on your emotional volatility.

Even rewinds are a metric now. Linger too long on one scene? It wasn’t just memorable — it was actionable. Every flicker of interest, every second you lean forward, becomes a flag for monetization. A signal to tweak the pitch, change the lighting, or modify the ad delivery window.

You’re not the customer anymore. You’re the subject.

This is much more than targeted marketing. It is emotional extraction. Netflix and YouTube are conditioning you and your loved ones. The goal is no longer passive consumption. It’s emotive response mining. Once satisfied with getting your eyeballs, they now want what’s behind them.

And here’s the most worrying part: Their devious plan is working.

You feel it when your pause screen suddenly knows you’re restless. You sense it when an ad knows you’re anxious. But you can’t prove it, because this isn’t surveillance as we used to know it. It’s ambient, implicit, and sanitized. Framed as “user experience.” But make no mistake, the living room has been compromised.

Netflix used to say, “See what’s next.” But increasingly, the real motto is “see what we see.” Every moment of attention, every flicker, flinch, or fast-forward, is a data point. Every glance is a gamble, wagered against your most vulnerable instincts.

Which brings us back to Experian. By partnering with the same data broker that helps banks deny loans, Netflix is making a statement. A troubling one.

Experian isn’t just some boring credit bureau. It’s one of the largest consumer data aggregators on the planet. It tracks what you buy, what you browse, where you live, how often you move, how many credit cards you have, what you watch, what you search, and what you owe. It then slices that information into little behavioral fragments to sell to advertisers, insurers, lenders, and now … to Netflix.

With 90 million U.S. users, Netflix has now integrated with a company whose entire business model revolves around profiling you — right down to your risk appetite, spending triggers, and likelihood of defaulting on a loan.

So while you're watching a true-crime documentary to unwind, Experian is in the back end, silently refining your “predictive segment.” Your favorite comedy special could now become a soft proxy for Experian to gauge how impulsive you are. That docuseries about minimalism? Great test case for your spending restraint. They don’t just want to know what you watch. They want to know what you’ll buy after. Or worse, what you’ll believe next.

The future isn’t one of generic binge-watching. It’s curated manipulation. Your partner just walked out? Cue romantic dramas … with targeted ads for dating apps. Watching a dystopian thriller? Insert ads for tech “solutions” to the very problems being dramatized.

Soon you won’t be choosing shows. You’ll be chosen by them. Not because they’re good, but because they serve a data-driven purpose. If you're a Netflix subscriber, perhaps it’s time to consider whether it still makes sense to continue funding the violation of your privacy.

‘The Terminator’ creator warns: AI reality is scarier than sci-fi



In 1984, director James Cameron introduced a chilling vision of artificial intelligence in “The Terminator.” The film’s self-aware AI, Skynet, launched nuclear war against humanity, depicting a future where machines outpaced human control. At the time, the idea of AI wiping out civilization seemed like pure science fiction.

Now, Cameron warns that reality may be even more alarming than his fictional nightmare. And this time, it’s not just speculation — he insists, “It’s happening.”

As AI technology advances at an unprecedented pace, Cameron has remained deeply involved in the conversation. In September 2024, he joined the board of Stability AI, a UK-based artificial intelligence company. From that platform, he has issued a stark warning — not about rogue AI launching missiles, but about something more insidious.

Cameron fears the emergence of an all-encompassing intelligence system embedded within society, one that enables constant surveillance, manipulates public opinion, influences behavior, and operates largely without oversight.

Scarier than the T-1000

Speaking at the Special Competitive Studies Project's AI+Robotics Summit, Cameron argued that today’s AI reality is “a scarier scenario than what I presented in ‘The Terminator’ 40 years ago, if for no other reason than it’s no longer science fiction. It’s happening.”

Cameron isn’t alone in his concerns, but his perspective carries weight. Unlike the military-controlled Skynet from his films, he explains that today’s artificial general intelligence won’t come from a government lab. Instead, it will emerge from corporate AI research — an even more unsettling reality.

“You’ll be living in a world you didn’t agree to, didn’t vote for, and are forced to share with a superintelligent entity that follows the goals of a corporation,” Cameron warned. “This entity will have access to your communications, beliefs, everything you’ve ever said, and the whereabouts of every person in the country through personal data.”

Modern AI doesn’t function in isolation — it thrives on data. Every search, purchase, and click feeds algorithms that refine AI’s ability to predict and influence human behavior. This model, often called “surveillance capitalism,” relies on collecting vast amounts of personal data to optimize user engagement. The more an AI system knows — preferences, habits, political views, even emotions — the better it can tailor content, ads, and services to keep users engaged.

Cameron warns that combining surveillance capitalism with unchecked AI development is a dangerous mix. “Surveillance capitalism can toggle pretty quickly into digital totalitarianism,” he said.

What happens when a handful of private corporations control the world’s most powerful AI with no obligation to serve the public interest? At best, these tech giants become the self-appointed arbiters of human good — the fox guarding the henhouse.

New, powerful, and hooked into everything

Cameron’s assessment is not an exaggeration — it’s an observation of where AI is headed. The latest advancements in AI are moving at a pace that even industry leaders find distressing. The technological leap from GPT-3 to GPT-4 was massive. Now, frontier models like DeepSeek, trained with ideological constraints, show that AI can be manipulated to serve political or corporate interests.

Beyond large language models, AI is rapidly integrating into critical sectors, including policing, finance, medicine, military strategy, and policymaking. It’s no longer a futuristic concept — it’s already reshaping the systems that govern daily life. Banks now use AI to determine creditworthiness, law enforcement relies on predictive algorithms to assess crime risk, and hospitals deploy machine learning to guide treatment decisions.

These technologies are becoming deeply embedded in society, often with little transparency or oversight. Who writes the algorithms? What biases are built into them? And who holds these systems accountable when they fail?

AI experts like Geoffrey Hinton, one of the field’s pioneers, along with Elon Musk and OpenAI co-founder Ilya Sutskever, have warned that AI’s rapid development could spiral beyond human control. But unlike Cameron’s Terminator dystopia, the real threat isn’t humanoid robots with guns — it’s an AI infrastructure that quietly shapes reality, from financial markets to personal freedoms.

No fate but what we make

During his speech, Cameron argued that AI development must follow strict ethical guidelines and "hard and fast rules."

“How do you control such a consciousness? We embed goals and guardrails aligned with the betterment of humanity,” Cameron suggested. But he also acknowledges a key issue: “Aligned with morality and ethics? But whose morality? Christian, Islamic, Buddhist, Democrat, Republican?” He added that Asimov’s laws could serve as a starting point to ensure AI respects human life.

But Cameron’s argument, while well-intentioned, falls short. AI guardrails must protect individual liberty and cannot be based on subjective morality or the whims of a ruling class. Instead, they should be grounded in objective, constitutional principles — prioritizing individual freedom, free expression, and the right to privacy over corporate or political interests.

If we let tech elites dictate AI’s ethical guidelines, we risk surrendering our freedoms to unaccountable entities. Instead, industry standards must embed constitutional protections into AI design — safeguards that prevent corporations or governments from weaponizing these systems against the people they are meant to serve.

Cameron is right to sound the alarm. AI is no longer a theoretical risk — it is here, evolving rapidly, and integrating into every facet of society. The question is no longer whether AI will reshape the world but who will shape AI.

As Cameron’s films have always reminded us: The future is not set. There is no fate but what we make. If we want AI to serve humanity rather than control it, we must act now — before we wake up in a world where freedom has been quietly coded out of existence.

DARPA is out of control



Few organizations embody the darker side of technological advancement like DARPA, the U.S. Department of Defense’s research arm. From stealth aircraft to the foundation of the internet, its innovations have reshaped warfare and infiltrated daily life. As anyone familiar with government agencies might expect, DARPA routinely crosses ethical lines, fueling serious concerns about privacy and control. Its relentless pursuit of cutting-edge technology has turned it into a force for domestic surveillance and behavioral manipulation. The agency operates with near impunity, seamlessly shifting its battlefield innovations into the lives of ordinary Americans.

Precrime predictions and de-banking dystopia

One of DARPA's most unsettling ventures is its development of an algorithmic Theory of Mind, a technology designed to predict and manipulate human behavior by mimicking an adversary's situational awareness. Simply put, this isn’t just spying; it’s a road map for controlling behavior. While it's framed as a military tool, the implications for civilian life are alarming. By harvesting massive amounts of behavioral data, DARPA aims to build algorithms that can predict decisions, emotions, and actions with unnerving precision. Imagine a world where such insights are weaponized to sway public opinion, deepen divides, or silence dissent before it even begins. Some might say we’re already there. Perhaps we are — but it can always get worse. Presented as a matter of national security, this kind of psychological manipulation poses a direct threat to free will and informed consent.

We live in a time when major agencies have shifted their focus inward. Domestic terrorism has become their new obsession. And in this climate, all Americans are fair game. The same surveillance and control mechanisms once reserved for foreign threats are now being quietly repurposed for monitoring, influencing, and manipulating the very people they claim to protect.

Equally alarming is DARPA’s Anticipatory and Adaptive Anti-Money Laundering (A3ML) program. Using artificial intelligence to predict illicit financial activities before they occur may sound like a noble pursuit, but this precrime framework carries Orwellian implications. A3ML casts an expansive surveillance net over ordinary citizens, scrutinizing their financial transactions for signs of wrongdoing. And as we all know, algorithms are far from infallible. They’re prone to bias, misinterpretation, and outright error, leaving individuals vulnerable to misidentification and false accusations. Consider the unsettling idea of being labeled a financial criminal because an algorithm misreads your spending habits. Soon, this won’t just be a hypothetical — it will be a reality.

Things are already bad enough.

Marc Andreessen, in a recent interview with Joe Rogan, highlighted the growing scourge of de-banking in America, where individuals sympathetic to Trump are unfairly targeted. This troubling trend underscores a larger issue: Algorithms, while often portrayed as impartial, are far from it. They’re engineered by humans, and in Silicon Valley, most of those humans lean left. Politically, the tide may be turning, but Silicon Valley remains dangerously blue, shaping systems that reflect its own ideological biases.

Without transparency and accountability, these systems risk evolving into even more potent tools of financial oppression, punishing innocent people and chipping away at the last shreds of trust in public institutions. Even worse, we could end up in a society where every purchase, every transaction, is treated like a potential red flag. In other words, a system eerily similar to China’s is looming — and it’s closer than most of us want to admit.

History’s lessons

These two programs align disturbingly well with DARPA’s history of domestic surveillance, most famously represented by the Total Information Awareness program. Launched after 9/11, TIA aimed to aggregate and analyze personal data on a massive scale, using everything from phone records to social media activity to predict potential terrorist threats. The program’s invasive methods sparked public outrage, leading to its official termination — though many believe its core technologies were quietly repurposed. This raises a critical question: How often do DARPA’s military-grade tools slip into civilian use, bypassing constitutional safeguards?

Too often, I suggest.

Who’s watching the watchers?

The implications of DARPA’s programs cannot be overstated. Operating under a dangerous degree of secrecy, the agency remains largely shielded from public scrutiny. This lack of transparency, combined with its sweeping technological ambitions, makes it nearly impossible to gauge the true extent of its activities or the safeguards — if any exist — to prevent abuse.

We must ask how DARPA’s tools could be turned against the citizens they claim to protect. What mechanisms ensure that these technologies aren’t abused? Who holds DARPA accountable? Without strong oversight and clear ethical guidelines, the line between protecting the public and controlling it continues to blur.

Let’s hope someone in Donald Trump’s inner circle is paying attention — because the stakes couldn’t be higher.

DARPA is out of control.

Eyes everywhere: The AI surveillance state looms



Rapid advancements in artificial intelligence have produced extraordinary innovation, but they also raise significant concerns. Powerful AI systems may already be shaping our culture, identities, and reality. As technology continues to advance, we risk losing control over how these systems influence us. We must urgently consider AI’s growing role in manipulating society and recognize that we may already be vulnerable.

At a recent event at Princeton University, former Google CEO Eric Schmidt warned that society is unprepared for the profound changes AI will bring. Discussing his recent book, “Genesis: Artificial Intelligence, Hope, and the Human Spirit,” Schmidt said AI could reshape how individuals form their identities, threatening culture, autonomy, and democracy. He emphasized that “most people are not ready” for AI’s widespread impact and noted that governments and societal systems lack preparation for these challenges.

Schmidt wasn’t just talking about potential military applications; he was talking about individuals’ incorporation of AI into their daily lives. He suggested that future generations could be influenced by AI systems acting as their closest companions.

“What if your best friend isn’t human?” Schmidt asked, highlighting how AI-driven entities could replace human relationships, especially for children. He warned that this interaction wouldn’t be passive but could actively shape a child’s worldview — potentially with a cultural or political bias. If these AI entities become embedded in daily life as educational tools, digital companions, or social media curators, they could wield unprecedented power to shape individual identity.

This idea echoes remarks made by OpenAI CEO Sam Altman in 2023, when he speculated about the potential for AI systems to control or manipulate content on platforms like Twitter (now X).

“How would we know if, like, on Twitter we were mostly having LLMs direct the … whatever’s flowing through that hive mind?” Altman asked, suggesting it might be impossible for users to detect whether the content they see — whether trending topics or newsfeed items — was curated by an AI system with an agenda.

He called this a “real danger,” underscoring AI’s capacity to subtly — and without detection — manipulate public discourse, choosing which stories and events gain attention and which remain buried.

Reshaping thought, amplifying outrage

The influence of AI is not limited to identity alone; it can also extend to the shaping of political and cultural landscapes.

In its 2019 Global Risks Report, the World Economic Forum emphasizes how mass data collection, advanced algorithms, and AI pose serious risks to individual autonomy. A section of the report warns that AI and algorithms can be used to monitor and shape our behaviors, often without our knowledge or consent.

The report highlights that AI has the potential to create “new forms of conformity and micro-targeted persuasion,” pushing individuals toward specific political or cultural ideologies. As AI becomes more integrated into our daily lives, it could make individuals more susceptible to radicalization. Algorithms can identify emotionally vulnerable people, feeding them content tailored to manipulate their emotions and sway their opinions, potentially fueling division and extremism.

We have already seen the devastating impact of similar tactics in the realm of social media. In many cases, these platforms use AI to curate content that amplifies outrage, stoking polarization and undermining democratic processes. The potential for AI to further this trend — whether in influencing elections, radicalizing individuals, or suppressing dissent — represents a grave threat to the social fabric of modern democratic societies.

In more authoritarian settings, governments could use AI to tighten control by monitoring citizens’ every move. By tracking, analyzing, and predicting human actions, AI fosters an environment ripe for totalitarian regimes to grow.

In countries already compromising privacy, AI’s proliferation could usher in an omnipotent surveillance state where freedoms become severely restricted.

Navigating the AI frontier

As AI continues to advance at an unprecedented pace, we must remain vigilant. Society needs to address the growing potential for AI to influence culture, identity, and politics, ensuring that these technologies are not used for manipulation or control. Governments, tech companies, and civil society must work together to create strong ethical frameworks for AI development and deployment that are devoid of political agendas and instead embrace individual liberty and autonomy.

The challenges are complex, but the stakes are high. Schmidt, Altman, and others in the tech industry have raised alarms, and it is crucial that we heed their warnings before AI crosses an irreversible line. We need to establish global norms that safeguard privacy and autonomy, promoting transparency in how AI systems are used and ensuring that individuals retain agency over their own lives and beliefs.

Big Brother’s bigger brother: The Five Eyes’ war on your freedom



“Think of the children.”

Few phrases have been more effective at dismantling rights and silencing opposition. It’s the ultimate rhetorical Trojan horse, bypassing rational debate to smuggle in crippling, inhumane policies.

The Five Eyes alliance — an Orwellian pact of surveillance states spanning the U.S., U.K., Canada, Australia, and New Zealand — has perfected this tactic. Its latest campaign claims to protect children from harm. Don’t be fooled. The real goal is to invade every corner of your digital life. Marketed as a crackdown on platforms like TikTok and Discord, accused of radicalizing youth, these efforts pave the way for a surveillance system more destructive than anything seen before. Big Brother has a Bigger Brother.

Erasing encryption

Now, to be clear, TikTok is a serious problem. The app is a digital honey trap for the Chinese Communist Party, vacuuming up data and warping young minds with addictive content. But Beijing doesn’t have a monopoly on exploitation. The United States, alongside its Five Eyes allies, is quietly turning “protecting children” into a blunt instrument to crush dissent and invade every corner of your life. “Violent extremist content is more accessible, more digestible, and more impactful than ever before,” claims the Five Eyes initiative. This assertion may justify increasingly invasive measures under the pretext of preventing exposure to such content.

Which takes us to the heart of this initiative: a relentless assault on encryption — the very backbone of digital privacy. By undermining encryption, the alliance aims to tear down the barriers safeguarding your most sensitive information, from private conversations to financial records.

The push to weaken encryption has nothing to do with safety; it’s about control. Demolishing encryption protections doesn’t just expose Americans to government overreach; it also leaves them wide open to cybercriminals, identity thieves, and hostile foreign actors. And in a darkly ironic twist, it makes children — the very people these elites claim to be protecting — far more vulnerable to the same predators they claim to fight. Back doors in encryption don’t discriminate. They become open doors, waiting to be exploited by anyone who can breach them.

Learning from history

Historically, cries of “save the children” have been a powerful tool to drive moral panics that systematically erode civil liberties. In America, this tactic has repeatedly served as justification for policies that expand state power at the expense of individual freedoms. During the Red Scare of the 1950s, protecting children from communist indoctrination became a rallying point for sweeping censorship and loyalty oaths. Teachers were fired, school curriculums gutted, and free expression stifled — all in the name of shielding youth from so-called subversive ideas.

The Five Eyes’ latest initiative is nothing more than the same authoritarian playbook, updated for the digital age.

“The online environment allows minors to interact with adults and other minors, allowing them to view and distribute violent extremist content which further radicalises themselves and others,” it reads. This highlights the potential for mass monitoring of minors’ online activities, raising concerns about privacy and disproportionate responses. More troublingly, it sets the stage for invasive measures that target young people under the pretense of safety.

The emotional appeal of protecting youth is, yet again, being used to rally support for policies that concentrate power in the hands of the state. The pattern is unmistakable: Invoke fear, demand action, and chip away at freedoms in the process.

Same stuff, different decade.

The new scare

Today, it’s encryption in the crosshairs. Tomorrow, it could be the criminalization of dissent. Consider the language of the Five Eyes campaign, rife with vague terms like “malign actors” and “extremism.” These are not carefully defined threats but malleable excuses, broad enough to ensnare journalists, whistleblowers, or anyone daring to criticize those in power.

“Minors are increasingly normalising violent behaviour in online groups, including joking about carrying out terrorist attacks and creating violent extremist content.” The idea of monitoring and interpreting minors’ online jokes or behaviors could lead to punitive actions against young people for relatively harmless activities. Sharing a meme, for instance, could be misconstrued as evidence of radicalization, turning a harmless joke into a justification for invasive surveillance or even legal consequences.

The initiative insists that a “renewed whole-of-society approach is required to address the issue of minors radicalising to violent extremism.” Such broad language can all too easily be read as a mandate for expansive powers that infringe on individual rights and freedoms. This approach might involve mass data collection or enlisting private entities as de facto surveillance agents.

The danger isn’t hypothetical. The United States already leads the world in invasive surveillance. Think of the NSA’s PRISM program, exposed by Edward Snowden, which harvested Americans’ emails, messages, and browsing history under the flimsiest of legal pretexts. Weakening encryption will only supercharge this predation, turning every device into a surveillance tool. Yes, things are already dire — privacy is virtually nonexistent. But it can always get worse. The erosion of rights doesn’t happen all at once; it’s a slow, relentless grind, and complacency is its greatest ally.

America must push back against this descent. TikTok is not the only enemy. If the Five Eyes initiative succeeds, future generations will curse us for our cowardice.

How your smart TVs are spying on you and your loved ones



Once, not that long ago, televisions were beloved devices that brought families together for regular rituals of laughter, drama, and storytelling. But today, as we settle in for a night of streaming on our sleek smart TVs, that warmth feels increasingly distant. These modern monstrosities offer endless options and voice-activated convenience, but this comes at a steep price. While we put our feet up and enjoy our favorite shows, we’re also inviting a level of surveillance into our homes that would have been unthinkable a few decades ago.

According to a new report by the Center for Digital Democracy, smart TVs have become yet another cog in a massive, data-driven machine. Specifically, this machine is an ecosystem that harvests viewer data with military-like precision, prioritizing profits over privacy, individual autonomy, and, arguably, our collective well-being.

Big Brother isn't just in your living room — he knows what you’re watching, what you’re thinking, what you’re buying, and even where you’re going.

A Trojan horse in disguise

As the report details, these devices function as sophisticated surveillance tools, tracking viewers' every move across platforms. From Tubi to Netflix to Disney+, streaming services rely heavily on various data collection mechanisms to fuel a relentless advertising engine. These companies boast about their ability to collect "billions of rows of data" on their viewers, using machine learning algorithms to personalize the entire experience — from what shows are recommended to the ads viewers are served.

Tools like Automatic Content Recognition — built into TVs by companies such as LG, Samsung, and Roku — track and analyze everything you watch. ACR collects data frame by frame, creating detailed viewer profiles that are then used for targeted advertising. These profiles can include information about the devices in your home and the content you purchase, all feeding into a continuous feedback loop for advertisers. The more you watch, the more the system learns about you — and the greater its ability to shape your choices. The “non-skippable” ads, personalized to reflect intimate knowledge about viewers' behaviors and vulnerabilities, are particularly disturbing. They are engineered to be as compelling and intrusive as possible.

Smart TVs are living up to their names. They know everything about you. And I mean absolutely everything.

Data-driven manipulation

The streaming industry has rapidly grown into one of the most lucrative advertising sectors, with platforms like Disney+, Netflix, and Amazon Prime attracting billions in ad revenue. As the report warns, these platforms now use advanced generative AI and machine learning to produce thousands of hyper-targeted ads in seconds — ads for Mom, ads for Dad, and ads for the little ones. By employing tools like identity graphs, which compile data from across an individual’s digital footprint, streaming services can track and target viewers on their televisions and throughout their entire digital lives. That's right. Smart TVs seamlessly interact with other smart devices, basically "talking" to each other and sharing valuable gossip.

This data collection goes far beyond tracking viewing habits. The report reveals that companies like Experian and TransUnion have developed identifiers that encompass deeply personal details, such as health information, financial status, and political views. Who will you vote for in November? You already know — and so does your TV.

Crooked capitalism

At its core, capitalism has been a driving force of innovation, progress, and prosperity. Its brilliance lies in its ability to harness human creativity and ambition, rewarding those who bring value to the market. In its purest form, it is entirely meritocratic: through competition and the pursuit of profit, it has lifted millions out of poverty and helped make America the greatest nation known to man.

What we see today, however, is a gross distortion of capitalism’s core principles. Surveillance capitalism has taken the place of pure capitalism. Instead of fostering innovation, this monstrous model feeds off personal data, often without our knowledge or consent. It preys particularly on vulnerable groups like children, exploiting their behaviors and emotions to turn a profit. The same system that once championed freedom now thrives on violating privacy, reducing human experiences to commodities.

Smart TVs and surveillance capitalism go hand in hand.

This raises an urgent question: What can we do about it? While it’s tempting to grab a sledgehammer and smash your nosy device into a million pieces, more practical solutions exist.

Start by diving into your TV's settings and disabling data tracking features such as ACR. You can also refuse to sign up for accounts or services that require extensive data sharing. For those willing to pay a bit more, opting for ad-free services can limit the data collected on your viewing habits, though it’s not a foolproof solution.

Additionally, advocating for stronger regulations on data privacy and transparency in advertising technologies is crucial. As consumers, we need to push policymakers to implement stricter laws that hold companies accountable for the data they collect and how they use it. Organizations like the Center for Digital Democracy, which authored this important report, are already fighting for these changes. This is a matter of critical importance. Close to 80% of homes in the U.S. have a smart TV.

Big Brother isn't just in your living room — he knows what you’re watching, what you’re thinking, what you’re buying, and even where you’re going. Not for the sledgehammer, I hope.

Your car is SPYING on you — and it’s only going to get worse



If you thought your personal property was private, then you might not have read the agreement you signed when you purchased it — and this is especially true when it comes to your vehicle.

Car expert Lauren Fix is sounding the alarm, explaining that in the infrastructure bill of 2021, “there’s a kill switch law.”

“That kill switch law allowed them to listen in your car, to monitor your eyes, to literally track all of your information,” Fix tells Hilary Kennedy and Matthew Peterson of “Blaze News Tonight.”

“And what are they doing with that information?” she continues. “We know that manufacturers are hurting financially. We see a lot of cars sitting on lots, and as long as prices keep getting higher, their profit margins are shrinking.”


So they sell the data to places like insurance companies and the police department.

“Then, just recently Ford decided to create a patent that would sell all your information directly to the police department so that they wouldn’t have to go through some sort of contract,” Fix explains, adding, “Which, again, is a violation of our privacy and really infuriates me.”

While Americans are right to be infuriated, the problem is that they do disclose this information in the paperwork — which almost no one actually reads.

“It’s going to get worse because in 2026, all cars are going to have a kill switch in them,” Fix says. “That’s going to tell whether you’re under the influence of something by the start/stop button.”

“Is all of this perfectly legal?” Peterson asks, concerned.

“Well, it actually probably isn’t. But we sign those agreements,” Fix says.
