Facebook and Twitter take new actions to censor 'hate speech'

Big Tech companies Facebook and Twitter are developing plans to further automate the identification of posts that violate rules against what they deem "hate speech."

Twitter on Wednesday announced an expansion of its hateful conduct policy to ban language that "dehumanizes people on the basis of race, ethnicity or national origin." The company will also "continue to surface potentially violative content through proactive detection and automation."

Twitter will remove tweets that violate the rules if they are reported, and users whose tweets are repeatedly reported may have their accounts temporarily locked or permanently suspended.

Examples of tweets that could be banned provided by Twitter include, "All [national origin] are cockroaches who live off of welfare benefits and need to be taken away," or, "[Religious Group] should be punished. We are not doing enough to rid us of those filthy animals."

Twitter also announced steps it has taken to ensure that its rules are enforced consistently.

"Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better prepared when reviewing report," the company said.

At the same time, Facebook is reportedly about to conduct a "major overhaul of its algorithms that detect hate speech," according to internal documents reviewed by the Washington Post. The Post reported that Facebook will reverse its "race-blind" enforcement of rules against hate speech, policing hate speech against blacks, Muslims, multi-racial people, LGBT-identifying people, and Jews more aggressively than anti-white hate speech.

This effort, called the WoW project, will reportedly rework Facebook's algorithms to improve detection and deletion of "the worst of the worst" hate speech violations. According to the Post, Facebook would assign numerical scores to certain kinds of posts, weighting them by perceived harm. The algorithm will prioritize posts with higher scores.

For example, a post that says "gay people are disgusting" will have higher priority for Facebook's algorithms than a post that says "men are pigs."

The Post reported that Facebook has already changed its algorithms to de-prioritize policing comments that refer to "whites," "men," and "Americans." While posts attacking these groups are still considered "hate speech," Facebook's technology treats such posts as "low-sensitivity," or less likely to be perceived as harmful or abusive. Facebook's algorithms no longer automatically delete such posts.
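The mechanism described in the Post's reporting — harm-weighted scores per targeted group, prioritization of high-scoring posts, and "low-sensitivity" groups routed away from automatic deletion — can be sketched roughly as follows. This is a minimal illustration of that general scheme, not Facebook's actual code: all weights, thresholds, field names, and functions here are hypothetical.

```python
import heapq

# Hypothetical per-group severity weights, loosely mirroring the Post's
# description: attacks on some groups score as higher-harm, while
# "low-sensitivity" groups are demoted so flagged posts are queued for
# human review instead of being auto-deleted. All values are invented.
SEVERITY = {
    "race": 3.0,
    "religion": 3.0,
    "sexual_orientation": 3.0,
    "gender": 1.0,       # treated as low-sensitivity in this sketch
    "nationality": 1.0,  # likewise
}
LOW_SENSITIVITY = {"gender", "nationality"}
AUTO_DELETE_THRESHOLD = 2.0  # arbitrary cutoff for illustration

def harm_score(post):
    """Score a flagged post by group weight times classifier confidence."""
    return SEVERITY.get(post["target_group"], 1.0) * post["classifier_confidence"]

def triage(posts):
    """Split flagged posts into auto-delete and human-review queues.

    Both queues are max-priority heaps (scores negated), so the
    highest-harm posts are processed first.
    """
    auto_delete, review = [], []
    for post in posts:
        s = harm_score(post)
        if s >= AUTO_DELETE_THRESHOLD and post["target_group"] not in LOW_SENSITIVITY:
            heapq.heappush(auto_delete, (-s, post["id"]))
        else:
            heapq.heappush(review, (-s, post["id"]))
    return auto_delete, review
```

Under this sketch, a post attacking a high-severity group with the same classifier confidence outranks one attacking a low-sensitivity group, and the latter never reaches the auto-delete queue regardless of score — matching the two-tier behavior the Post describes.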

"We know that hate speech targeted towards underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious," Facebook spokeswoman Sally Aldous told the Post. "Over the past year, we've also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned Holocaust denial."

Social media companies have lately drawn heavy criticism from both the left and the right over how they enforce their rules against hate speech. Conservatives and right-leaning media organizations accuse Big Tech companies of censoring posts they disagree with, while left-leaning groups say social media platforms are not doing enough to police hate speech and stop the spread of fake news and misinformation.

Civil rights activists who gave statements to The Hill praised Facebook and Twitter for "progress" in removing hate speech but expressed skepticism that the changes will meet their demands.

"This is progress, but Twitter demonstrated a consequential lack of urgency in implementing the updated policy before the most fraught election cycle in modern history, despite repeated warnings by civil rights advocates and human rights organizations," said Arisha Hatch, the vice president of Color of Change.

Hatch accused Twitter of having a "non-committal and cavalier attitude toward transparency" by failing to explain how content moderators are trained or how Twitter's artificial intelligence identifies offending posts.

"The jury is still out for a company with a spotty track record of policy implementation and enforcing its rules with far-right extremist users. Void of hard evidence the company will follow through, this announcement will fall into a growing category of too little, too late PR stunt offerings," Hatch said of Twitter.

Hatch called Facebook's reported plans to change its algorithms "confirmation of what we've been demanding for years, an enforcement regime that takes power and historical dynamics into account."

Sum of Us, an anti-corporate advocacy group, told The Hill that Facebook's proposed changes do not go far enough to police hate speech.

"Facebook is well aware of the harm it causes by allowing some of the most vile content to be promoted through its algorithms. Their latest move to more aggressively police anti-Black hate speech and false claims about COVID-19 vaccines shows that they have the ability to clean up their act — if they want to," Sum of Us executive director Emma Ruby-Sachs said.