AI Censorship Targets People Who Read Primary Sources To Fact-Check The News
Artificial intelligence censorship tools are making sure you never read this article or share it with anyone it might persuade.
Technologies previously used against America's enemies are now being wielded against its citizens.
A new report from Just the News indicates that the Biden administration, in concert with Big Tech, academia, and big corporations, is pouring taxpayer money into an AI censorship program that allegedly utilizes systems once called upon to wage information warfare against the Islamic State.
According to one free speech watchdog, "Under the Biden Admin, the [National Science Foundation] is funding the idea that if citizen trust in government cannot be earned organically, then it must be installed by science."
In late 2022, The Intercept detailed the Department of Homeland Security's efforts to broaden its clampdown on free speech and shape online discourse, going far beyond the promises of the aborted "Disinformation Governance Board."
Documents obtained through leaks and lawsuits revealed that government agencies were working together to "mature a whole-of-government approach to mitigating risks of [mal-information], framing which tools, authorities, and interventions are appropriate to the threats impacting the information environment."
The DHS reportedly justified these curbs on speech and decisions on what information people should be permitted to engage with by suggesting that terrorist threats could be "exacerbated by misinformation and disinformation spread online."
It has become abundantly clear that other statist elements, sometimes acting in concert, have similarly engaged in censorship and narrative seeding, particularly after Elon Musk's "Twitter Files" revealed that federal operatives pressured private companies into censoring journalists, dissenters, and even a former president.
The impact may have been as much electoral as social.
FBI agents reportedly leaned on at least one social media giant to prevent a now-confirmed story injurious to then-candidate Joe Biden's election chances from circulating.
These labor-intensive efforts to police speech and manage narratives could apparently do with an upgrade.
The National Science Foundation has conferred millions of dollars in taxpayer money in the form of grants to universities and private firms so that they can develop censorship tools.
Just the News reported that these tools in many ways resemble those developed by the Defense Advanced Research Projects Agency in its Social Media in Strategic Communications program, which lasted from 2011 to 2017.
These tools were intended to "help identify misinformation or deception campaigns and counter them with truthful information, reducing adversaries' ability to manipulate events."
To accomplish this, DARPA noted that SMISC "will focus research on linguistic cues, patterns of information flow and detection of sentiment or opinion in information generated and spread through social media. Researchers will also attempt to track ideas and concepts to analyze patterns and cultural narratives."
Rand Waltzman, program manager at DARPA at the time SMISC was launched, understood that the "effective use of social media has the potential to help the Armed Forces better understand the environment in which it operates and to allow more agile use of information in support of operations."
"We must eliminate our current reliance on a combination of luck and unsophisticated manual methods by using systematic automated and semi‐automated human operator support to detect, classify, measure, track and influence events in social media at data scale and in a timely fashion," he added.
In hopes of advancing military ends and achieving greater control over narratives communicated in virtual realms, Waltzman established four goals for the program.
In addition to developing technologies to better mine opinion and track memes, SMISC researchers sought better ways to automate content generation, weaponize social media bots, and crowdsource.
Mike Benz, executive director at censorship watchdog Foundation for Freedom Online (FFO), explained to Just the News that "DARPA's been funding an AI network using the science of social media mapping dating back to at least 2011-2012, during the Arab Spring abroad and during the Occupy Wall Street movement here at home."
"They then bolstered it during the time of ISIS to identify homegrown ISIS threats in 2014-2015," he added.
This same technology is now allegedly being used to target those "wary of potential adverse effects from the COVID-19 vaccine and those skeptical of recent U.S. election results."
Benz previously documented the censorship activities of the DHS and its "speech control partners" during the 2020 election cycle alone.
According to Benz, the shift from targeting foreign entities to Americans largely occurred after special counsel Robert Mueller failed to find "collusion" between former President Donald Trump and Russian actors.
In the so-called "Foreign-To-Domestic Disinformation Switcheroo," federal agencies turned their attention from "foreign disinformation" to "domestic disinformation," with some officials allegedly suggesting that the latter posed a significant threat to American infrastructure.
Now, with their attention apparently directed inward, there is a push for greater censorship capability.
The FFO reported that since the start of the Biden administration, the National Science Foundation has spent nearly $40 million on grants and contracts, primarily through its Convergence Accelerator, to combat "misinformation."
Additionally, "64 NSF grants totaling $31.8 million were given to 42 different colleges and universities to research the science of stopping viral ideas," with some grants "explicitly target[ing] 'populist politicians' and 'populist communications' to scientifically determine 'how best to counter populist narratives.'"
The State University of New York, George Washington University, and the University of Wisconsin are among the schools that have received grants for this research.
"One of the most disturbing aspects of the Convergence Accelerator Track F domestic censorship projects is how similar they are to military-grade social media network censorship and monitoring tools developed by the Pentagon for the counterinsurgency and counterterrorism contexts abroad," Benz said.
The Convergence Accelerator was reportedly created to tackle challenges like quantum technology, but its infrastructure has allegedly been repurposed to police speech.
"And so they created a new track called the track F program ... and it's for 'trust and authenticity,' but what that means is, and what it's a code word for is, if trust in the government or trust in the media cannot be earned, it must be installed," said Benz. "And so they are funding artificial intelligence, censorship capacities, to censor people who distrust government or media."
Benz summarized what this all means: "The government says, 'Okay, we've established this normative foothold in it being okay to [censor political speech], now we're going to supercharge you guys with all sorts of DARPA military grade censorship, weaponry, so that you can now take what you've achieved in the censorship space and scale it to the level of a U.S. counterinsurgency operation.'"