Commentary: Twitter pressured to ban the word ‘groomer’ but not actual grooming
The debate about how the word “groomer” is used on Twitter is heating up. Some have called for the platform to ban the use of the word altogether.

I understand that some people object to the word or how it's used, but we should be asking why child abusers seem to have more protections on Twitter than minor victims of grooming and sexual abuse. While the platform prioritizes reports about specific words and polices how they are used, some of the most egregious reports of crimes against children go unaddressed. Actual grooming and child sexual exploitation are happening on Twitter. If the platform is going to focus on anything, it should be the complete removal of child sexual exploitation at scale. That should be its top priority.

Sextortion

On Friday, July 22, 2022, Matthew K. Walsh, age 24, of Baltimore, Maryland, pleaded guilty to sexual exploitation of a minor in order to produce child pornography. Specifically, Walsh admitted that he created fictitious online profiles purporting to be a minor female to contact and induce minor males between the ages of 12 and 17 to send sexually explicit images and videos to the individual they believed to be a minor female, but who was, in fact, Walsh.

Walsh victimized at least 40 minor males, according to the U.S. attorney's office. He distributed their pictures and videos, uploaded them to Twitter accounts, and sold the material to others, communicating with at least 50 other Twitter users about purchasing the material. Walsh apparently had at least 22 Twitter accounts.

Walsh then used the material obtained from minor males to blackmail and coerce them into doing more.

Child sex abuse operations

Walsh is just one example of a grooming predator running rampant on Twitter. Another recent case involved a child abuser who sometimes groomed children on Twitter, sexually abused them, documented the abuse, and posted the imagery on the main feed of his Twitter account. He used the platform for four years and sold child sexual abuse material to his 290,000 followers. Unfortunately, this situation is not the first of its kind. These types of operations keep popping up on Twitter globally.

Prioritize reports

When a report of child sexual exploitation is made on Twitter, it should be handled quickly and correctly. The case of John Doe #1 and John Doe #2 is a clear example of what a platform should not do. Child sexual exploitation material depicting the two minor males, both 13 years old in the video, was posted to Twitter, where it racked up over 167,000 views and 2,223 retweets. When Twitter finally answered their report to remove the video, it replied, “Thanks for reaching out. We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time.” The Department of Homeland Security had to step in to have the video removed.

Twitter is one of many platforms facing this issue, but instead of asking these companies to ban conversations about grooming, I’d like to ask how we can expand that conversation. Parents and caregivers need to speak to their children about internet safety, especially the risk of sextortion. The FBI has worked hard to get the word out that this crime is escalating, but the message isn’t reaching parents quickly enough. I’d like to see anyone with a platform educating about these issues, including faith-based communities, schools, and the corporate press. If Tucker Carlson did a segment on sextortion, it would reach millions with the information they need to protect their children and communities.

We are at crisis levels of grooming and exploitation of our children, especially online. In one heartbreaking recent story, a victim of sextortion died by suicide. Instead of urging platforms to remove non-violent speech, we should be encouraging them to stand by their own terms of service (https://help.twitter.com/en/rules-and-policies/sexual-exploitation-policy) and actually remove child sexual exploitation at scale. I believe a completely free-speech version of Twitter would allow the platform's reporting system to prioritize the most egregious reports.

It’s common for people to see this conversation and request that the government step in. I see that as one of the most dangerous solutions, because then we run the risk of government overreach and a loss of digital privacy rights. Mass surveillance is never a great solution. Banning the word "groomer" isn’t the correct place to start this conversation either, as it doesn't solve the problem and risks punishing advocates and survivors. The best solutions are to educate your families, raise awareness, and apply pressure on the platforms to innovate and make impactful changes to address the removal of child sexual exploitation.
