Goodbye, anons? Radical transparency is about to upend the internet



In June, Texas Patriot, a prominent anonymous account supportive of President Donald Trump, announced during the height of tensions with Iran:

F**k it. If Trump takes us to war, I’m done with him and his administration.
I voted for:
NO WARS
No taxes
Cheap gas
Cheap groceries
MAHA.
What of these things has actually happened?
I’m pissed.

This message from a popular pro-Trump account seemed significant. Was Trump’s populist base turning on him?

But shortly thereafter, Right Angle News, another popular anonymous account, asserted that Texas Patriot was actually based in Pakistan. Yet another popular anon account contested this, saying that Texas Patriot is really an American originally from Texas who now lives in Georgia. Notably, most other major accounts weighing in on the controversy, from Proud Elephant to Evil Texan, are themselves anonymous, adding further to the hall of mirrors.

Either way, Texas Patriot deleted its own account shortly thereafter, perhaps suggesting that he or she had something to hide — or at least didn’t want the scrutiny.

The question of whether Texas Patriot is, in fact, a patriot from Texas or a bad actor in Islamabad is ultimately beside the point. As Newsweek wrote of the incident:

Social media has proved useful for galvanizing the MAGA movement, with popular accounts often reacting to political developments from Trump’s feud with X owner Elon Musk to Trump’s policy agenda. If it emerged that an account alleged to be American was actually based in another country, it would impact users’ trust.

And such trust is rapidly eroding, an erosion that will only accelerate as ever more sophisticated fake accounts and bot farms are exposed.

The incident was just one of many in which major social media accounts were discovered — or at least suggested — to be run by someone far different from who they were purported to be. And it previews a shift that is just now beginning, which will fundamentally change how we interact with social media content.

Bots indistinguishable from humans

When it comes to who will rule social media, the age of the anon is ending. The age of radical transparency is beginning — and yet, if designed well, radical transparency can still include a substantial and valuable space for a large degree of online anonymity.

Several reasons explain the shift. Increasingly sophisticated artificial intelligence models and bots generate output that, in many cases, is already almost indistinguishable from human writing. For most users, it will soon become fully indistinguishable (a fact confirmed by multiple studies showing that most people are poor at telling the difference between the two). And almost certainly, bots guided by even a minimum of human interaction will become indistinguishable from actual humans.

Many of my best friends have had anon accounts. A few are still prominent anons. It’s also noteworthy that almost every prominent ex-anon I know personally, whether doxxed or self-outed, dramatically improved their profile and professional opportunities once they were no longer anonymous.

I am not anti-anon, however. I understand why some people, especially those expressing opinions well outside of the mainstream, need to be anonymous. I also acknowledge that anonymity has been a crucial part of the American political tradition since the revolutionary era. An internet that banned anons would be an internet that is much poorer. This is why the biggest current anon accounts will be grandfathered into the coming system of radical transparency, as they have actual operators who are known to enough people that they are recognized as genuine.

I know several big anon accounts like this. I don’t know who is running them, but I have multiple offline friends I trust who do know the account holders and vouch for them. Accounts of this kind, with credible, real-world validation, will continue to have influence. But increasingly, new big anon accounts will be ignored, even if they amass a large number of followers (many of whom are fake).

As these ersatz accounts grow more sophisticated by the day, engaging with the truly real becomes ever more important. Fake videos and photos proliferating on social media only add to the potential for deception.

Age of radical transparency

Even accounts run by real people will not be immune to the age of radical transparency. Some are partially or wholly automated — a way for a “content creator” to maintain a cheap 24-hour revenue stream. In the future, if you want to have influence, mechanisms will be in place to prove not only that it is you who are posting but that you are posting content that is authentic, with a proven real-world point of origin. Some have even suggested using the blockchain as a method of validation.

There should be a simple way of blocking the worst AI slop accounts, foreign bad actors who post highly packaged clickbait, or those who shamelessly steal content made by others. Most Americans would probably prefer not to engage with unverified foreign accounts when discussing U.S. politics. Certainly, I would be willing to pay for a feed that only showed me real, verified accounts from America, along with a limited list of paid, verified, and non-anonymous accounts from other parts of the world.

I am interested in having discussions with real people about real content and the real opinions they have. I want accounts mercilessly downrated if they produce inauthentic content presented as real. I want accounts downrated that regularly retweet unverified slop. If X, or any other online platform, can’t consistently provide that, I’ll look elsewhere — and so will many others.

Anonymity breeds toxicity

My desire for authenticity is not a left-wing attempt to police “disinformation” — that is, whatever the left doesn’t want said. It’s far more serious. It’s not about getting “true” facts but about having a feed filled with actual people producing their own content, representing their own views — with clear links to the sources for their claims.

Anonymity has, naturally, always been accompanied by a slew of problems: It can lead to echo chambers or aggressive exchanges, as users feel less pressure to engage rationally.

The lack of personal stakes can escalate conflict, which is amplified by AI. Modern AI can generate thousands of unique, human-like posts in seconds, overwhelming feeds with propaganda or fake news. The increasing influence of state actors in this fake news ecosystem makes it even riskier.

RELATED: Slop and spam, bots and scams: Can personalized algorithms fix the internet?

Anonymity also emboldens individuals to act without fear of repercussions, which often has downsides. The online disinhibition effect, a psychological phenomenon first described by psychologist John Suler in 2004, suggests that anonymity reduces social inhibitions, leading to behaviors individuals might avoid in face-to-face settings.

Everyone has met the toxic anon online personality who turns out to be quite meek and agreeable in person. One friend of mine who had an edgy online persona eventually closed her anon account (with tens of thousands of followers) and recreated her online presence from scratch as a “face” account. Her tweets are no longer as fun or spicy as they had been, but her persona is real — and presents who she really is. And she eventually landed a great public-facing job, partly based on the quality of her tweets.

Dwindling era of anon accounts

Anons could play a leading role in the old social media world where bots were mostly obvious, and meaningful provocations were, in large part, created by real people through anonymous accounts. In our current world, however, where plausible fake engagement can be created on an almost limitless scale, true anons will lose a great deal of their power. They will be replaced as top influencers by those who are willing to be radically transparent.

Truly transparent identities should include verifiable information, such as email addresses, phone numbers, or government-issued IDs for account creation. While such information does not need to be publicly shared, it should be given to the social media company connected to the account.

While not foolproof, raising the barrier to AI-driven impersonation deters malicious actors, who must invest significant resources to create credible fake identities.

For anons unwilling to trust their private information to one of the major online platforms, third-party identity verifiers dedicated to protecting user privacy could carefully validate their identities while keeping them anonymous from social media companies. Such third-party brokers themselves would have their prestige checked by the accuracy of their verification procedures. This method would still allow for a high degree of public anonymity, bolstered by a backend that guarantees authenticity.
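
The broker arrangement described above can be sketched in a few lines of Python. Everything here is hypothetical: the `IdentityBroker` class, its method names, and the HMAC-token scheme are invented stand-ins for illustration, not any real verifier's API. A production system would likely use asymmetric signatures or zero-knowledge proofs rather than a single shared secret.

```python
import hashlib
import hmac
import secrets

class IdentityBroker:
    """Hypothetical third-party verifier: it sees the user's real identity,
    but hands the platform only an unlinkable attestation token."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # broker's private signing key

    def attest(self, real_identity: str, pseudonym: str) -> str:
        # In reality the broker would check government ID, liveness, etc.
        # Here we assume that check passed and bind the token to the
        # pseudonym only; the real identity never leaves the broker.
        assert real_identity  # stand-in for an actual verification step
        return hmac.new(self._key, pseudonym.encode(), hashlib.sha256).hexdigest()

    def confirm(self, pseudonym: str, token: str) -> bool:
        # The platform calls this to validate a token without ever
        # learning who is behind the pseudonym.
        expected = hmac.new(self._key, pseudonym.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token)

broker = IdentityBroker()
token = broker.attest("Jane Q. Citizen", "texas_patriot_76")
print(broker.confirm("texas_patriot_76", token))    # genuine token: True
print(broker.confirm("texas_patriot_76", "f" * 64)) # forged token: False
```

The point of the design is the separation of knowledge: the broker knows who you are but not what you post; the platform knows what you post but only that *someone verified* is posting it.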

A new internet age

In the future, pure online anonymity will not be banned — nor should it be. But in the coming age of radical transparency, a truly anonymous account — one whose owner’s real-world identity is neither known within its own trusted circles nor verified by a reliable third party — will have little to no value.

The next internet age will value not just what you say, but more importantly, that others know you are the one who is saying it.

Editor’s note: A version of this article appeared originally in The American Mind.

Slop and spam, bots and scams: Can personalized algorithms fix the internet?



From the super-spam Google search results loaded with videos instead of web pages to the “paid for by” advertisements heavy in social media feeds these days, it’s hard not to notice the internet morphing into … well, some call it slop (others use another four-letter word). Whatever your taste, or lack thereof, AI is sure to play a major, transformative role. Offsetting the massive and justified concerns are several palliative possibilities for the preservation of our humanity online — one possibility under consideration is the so-called individual, or customized, algorithm.

This is, in essence, a filter on the internet or in parts of it, such as particular websites, whereby you, an AI bot, or another entity (perhaps the operator of certain sites and apps) uses the overlay to curate your feed.

As an example, you’re scrolling the X.com timeline and decide you actually do value, say, the political takes of your ideological enemy but have no interest at all in connecting with or understanding various factions within your own presumed ideology. In terms relative to the “discourse,” it’s sort of a nuanced position. An algorithm tailored to enhance your predilections may be an option. Doesn’t exactly sound like the “town hall of the internet,” much less the “global public square,” but it might keep users engaged, and it might be useful for certain types of searches, engagements, and analysis.

Continuing with the X.com hypothetical, perhaps the programmers under Elon could, and this is the thrust of the issue, decide to allow for the application of various user-determined control parameters onto your feed, such that it weeds out what you want to ignore and gathers more of what you have determined you value. Seems straightforward, right? Why not roll it out and offer it as a subscriber add-on? Even if it’s not entirely customized, it’s getting close.

There are cost barriers and security considerations. Aren’t there always such barriers, though? Programming, maintaining, and monitoring such tailored algorithms and similar individualization is heavy on the compute. Compute requires energy, which requires money. The relative homogeneity of websites allows for economic, efficient computation — but doesn’t it also work to homogenize us, our desires, and perspectives?

RELATED: Liberal comedian lashes out at Netflix over Dave Chappelle special: 'F*** you and your amoral algorithm cult!'

This seems to be the battleground we find ourselves on now.

The other major obstacle, from the point of view of the internet proprietor, relates to scams: if individuals are granted or otherwise obtain (perhaps via AI) the technological tools to curate their own feeds more deeply than they do now, those same tools will open opportunities for abuse of various sorts.

One such argument points to AI-assisted algorithms deployed into a context like X.com with the objective of gathering intelligence, data, and so forth, to be leveraged later in some separate context. This happens already, as we all know, but evidently supercharging these efforts opens up yet more vulnerability online. Or so the argument goes. So it’s hard to say with any certainty how effective, useful, or desirable the option of individualized algorithms will be in the aggregate. Does it matter? At a spiritual level, maybe not. But at the immediate levels of survival, social life, and employability, yes, the internet absolutely still matters a great deal. For most people, just walking away isn’t an option.

And so the question many people are asking, even as the Trump administration works day and night to unravel decades of graft, fraud, and frankly traitorous activity at society’s many levels, is what exactly do we want and need out of the internet so we can thrive? It’s going to be more than pure market logic. How can we wrangle this thing to serve everyday Americans, or even mankind, while we’re at it?

Let me offer two basic predictions. One, the internet will continue with the logic of homogenization, of monoculture, which appears to describe and define most of corporate culture, and as a result, the internet may likely stratify more than splinter. Individual algorithms will pass away as just another stab in the dark of cyberspace exploration. Two, the homogenization will nevertheless finally become unprofitable — at least to the point where, beyond pure market operations, some of the more enjoyable and human operations will open up. Perhaps individualized algorithms wind up functioning as an effective stopgap, a Band-Aid, until we can get bigger medicine — wisdom — involved.

DOD ‘Social Engineering’ Program Developed Bots Capable Of Psychological Warfare

The Department of Defense has allocated millions of dollars to create phony social media profiles the government could turn against Americans.

Why universal basic income is a Trojan horse for globalist control over free citizens



To most people following the story, UBI means universal basic income. The proposal, which has floated around under different names since antiquity, took shape in its modern incarnation as a project mainly pushed by British intellectuals favoring (at a minimum) some kind of collectivist floor to capitalist society.

Today, this sort of welfare arrangement is more closely associated with tech and tech-adjacent people who see progress in automation as inevitable and/or highly desirable. That progress comes at a cost, however, because it erodes the relevance and usefulness of most human beings.

It is no surprise that the arc of utopian Anglo thinking would end up here. Communism, as formulated by the functionally Anglo Marx and Engels, looked forward to a time when all people became industrially free to toggle among whatever pursuits they preferred whenever they cared to do so.

It is but a small leap to posit that the only real path to realizing this utopian collective is for a special class of super-capitalists to build the only kind of industry that could theoretically liberate everyone from the need for work or, indeed, any economic valuation.

That agenda (and the worldview behind it) seems very difficult to reconcile or harmonize with Christianity — for many reasons, but perhaps above all because it dramatically encourages looking to the machinery of utopian collectivism (and the people behind it) as the source of all goodness, salvation, and creative power rather than to the Lord of all creation, the triune God.

All too predictably, it’s now increasingly fashionable and high-status for AI researchers and technicians to baldly proclaim that they’re building a god to be worshiped as the one true transformer of all people out of their given human form. This is a god that destroys the Christian God by destroying the crown jewel of His creation, the human being.

Of course, we’re told, this is a good thing, actually, because what comes next for us is beyond our wildest dreams — in other words, we’re about to become gods, too, and it will be like nothing anyone has imagined.

This promise will carry the sting of especially diabolical heresy to those familiar with the millennia-old sacred Christian tradition of theosis, the concept and (highly laborious) practice of working to achieve union with God eventually. That tradition, taught carefully by the Church, has emphasized that the greatest of spiritual risks and harms come from trying to shortcut or speed-run theosis, properly understood as the reunion desired for us all by the immeasurably loving God who created us. The path toward theosis is marked and defined by the utmost patience, humility, discipline, and self-denial — not by (for example) maximizing “mind-blowing” inventions that make it ever easier for people to experience ecstasies and produce fantasies.

In sum, the best and oldest Christian teachings have warned the most against what is being pitched to us most aggressively as humanity's ultimate universal achievement.

Notably, this warning has great power because it doesn’t simply order us to stop making or using advanced tools. Its counsel is more difficult and more spiritually purifying: to recognize that the temptation to usurp and replace God is so difficult to resist that our best efforts are doomed to failure without an utterly humble and absolute reliance on God and trust in Him — a round-the-clock watchfulness wherein we focus on stopping temptations at the spiritual door to our hearts before they can get in, take hold, and grow.

All this deep and needful wisdom seems to be entirely lost on the loudest and most prominent advocates of universal basic income today, who are really advocating it because it helps accelerate us toward universal bot idolatry.

Beneath the hype, advocates struggle to ignore the fact that even the most extraordinary machines are only means to ends outside and beyond them. All machines, all tools, are for something, and the existence and development of these useful devices always ultimately depend on a creator exercising some kind of discernment, judgment, and, it must be concluded, worship.

As bleeding-edge technologists increasingly recognize that theology and worship are inescapable no matter how radically machine-making evolves, they must inevitably come to realize that one’s own tool — one’s own creation — can never be one’s god. If you think you’re worshiping tech for tech’s sake, you are deluded; you’re actually serving some other idol, some other facet of God broken off and falsely elevated to spiritually sovereign status.

A lot of people laugh at satanism and even the idea of Satan, but there’s a reason the devil has stuck around in our consciousness to this very day. And a lot of people are about to relearn why.

Proving you're human online doesn't require a credit card



Elon Musk went viral last year with tweets suggesting that the future of social media is to pay for it, because otherwise, it’ll all just be bots. This problem of how to prove you're human and not a bot is only getting worse.

Musk is right: The bots are coming, and we do have to do something. But we have two more options besides the one he offers. Here’s the expanded list of our choices for proving our humanity in the age of ubiquitous AI:

  1. Pay-to-play (i.e., the Musk option)
  2. Web3’s large selection of existing proof-of-human offerings
  3. Government-issued digital ID

I’m guessing we’ll end up with some combination of the above, but insofar as number 3 will be the most attractive option for many people, it’s yet another way in which AI is an inherently centralizing force. As a decentralization maxi, though, I find the likelihood that we’ll end up in a world where the government can digitally unperson me with a mouse click concerning. If this worries you as well, then read on.

In this piece, I’ll give a very brief introduction to option number 2 — web3 proof-of-human services — for the non-crypto-pilled. Yes, I know, I know — nobody wants to read another piece about how “web3 fixes this.” But don’t close that tab!

Even if you hate crypto, it’s still worth acquainting yourself with just how much effort and money has gone into solving the precise problem Musk is worried about. Here are two relevant facts for the crypto-haters to consider before they bounce:

  1. Digital identity is a critical, well-established front in the “centralization vs. decentralization” war. So, if you care about this fight, then this issue matters.
  2. Recent advances in AI have fundamentally changed the digital ID terrain so that web 2.0 now has a problem that had previously been confined to web3 — i.e., how to do proof-of-human in a network where human nodes can be credibly impersonated at scale and at low marginal cost.

Proof-of-human is an early, fundamental web3 problem

One of the core distinctions between crypto and the traditional web is ubiquitous pseudonymity. Crypto types are super into the whole pseudonymous online persona thing.

Now, you may not care about pseudonymity, or you think it’s only for money launderers, dope peddlers, bootleggers, and prank callers. I get it, Boomer. But just bear with me for a moment because I promise I’m not trying to pseud-pill you — I’m just trying to help you understand why proof-of-human is such a longstanding web3 concern.

The de facto standard for identity on the current web (web2) is the email address plus password combination. To sign up for a new service, you usually supply these two items, and then you get a confirmation link in your email that you have to click to prove you’re the rightful owner of that email address.
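
As a rough sketch, that confirmation-link flow boils down to the server minting a token only it can produce and checking it when the link is clicked. The function names and the HMAC scheme below are invented for illustration, not any particular service's implementation.

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # kept private by the service

def confirmation_token(email: str) -> str:
    # The token embedded in the emailed link; only the server can mint it.
    return hmac.new(SERVER_SECRET, email.encode(), hashlib.sha256).hexdigest()

def confirm_click(email: str, token: str) -> bool:
    # Called when the user clicks the link: a valid token proves the
    # signup request and the inbox owner are the same party.
    return hmac.compare_digest(confirmation_token(email), token)

link_token = confirmation_token("alice@example.com")
print(confirm_click("alice@example.com", link_token))    # True
print(confirm_click("mallory@example.com", link_token))  # False
```

Note what this identity actually proves: control of an inbox at one moment in time, nothing more. That thin guarantee is the baseline web2 is now trying to strengthen.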

The standard for identity on web3, in contrast, is the crypto address. This is a public address on a public blockchain — often Ethereum — that you have a private key for and can, therefore, prove ownership of.

Web3 identities, then, have the following qualities:

  • Trivial and cost-free for a single person to create and use in bulk
  • Not inherently linked to any single person, company, or other entity
  • Used for accounts on internet services that are web3-based
  • Used for moving valuable assets around

You might say that in web3, every phone is a burner phone — there is no other kind. This is because it’s really easy to create new crypto addresses and use them as identities. You can do this locally by just creating a new public/private key pair in the correct format, and if you want to send some asset to that address (coins, NFTs), you can do that by interacting with the blockchain.

Obviously, this is a pretty treacherous combination of qualities that’s quite easy to abuse, even without any sort of advanced AI. If logging in to a web3-based service only requires a locally generated key pair, then a single, not very sophisticated person could spin up millions of these public/private key pairs on a laptop and use them to spam thousands of web3 applications with fake interactions using a few simple scripts. For instance, you could use this to manipulate DAO votes or abuse token-gated applications.
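
To make the cheapness concrete, here is a toy sketch of the bulk identity creation just described. The address derivation (SHA-256 of the private key) is a deliberate simplification; real Ethereum addresses are derived from the Keccak-256 hash of a secp256k1 public key. The economics the sketch demonstrates are real, though: identity creation is entirely local, instant, and free.

```python
import hashlib
import secrets

def new_toy_address() -> tuple[bytes, str]:
    """Create a disposable identity entirely offline.
    (Real chains derive the address from an elliptic-curve public key;
    hashing the private key directly is a stand-in for illustration.)"""
    private_key = secrets.token_bytes(32)
    address = "0x" + hashlib.sha256(private_key).hexdigest()[:40]
    return private_key, address

# A laptop can mint identities by the thousand in well under a second,
# with no registration, no cost, and no tie to any real-world person.
wallets = [new_toy_address() for _ in range(10_000)]
addresses = {addr for _, addr in wallets}
print(len(addresses))  # 10000 distinct burner identities
```

Nothing in the system pushes back against this: there is no per-identity fee, no rate limit, and no registrar, which is exactly why proof-of-human became a web3 problem long before it became a web2 one.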

The point: From the very beginning of web3’s existence, the frictionless ease of essentially disposable bulk identity creation has meant that web3 services have had the very proof-of-human problems that are only now truly catching up to web2 in the AI era.

The web3 solutions

If you google “web3 proof of humanity,” you’ll get a ton of results. Everyone has ideas about how to do this, and many of the ideas are very good and practical.

In addition, some web3 projects I’ve seen have their own built-in solution that you use to access the service or community, and in other web3 efforts, proof-of-human happens as a side effect (POAP is a good example of the latter, and STEPN may be another, in that proof-of-workout equals proof-of-humanity).

If anything, web3’s problem is that there are too many solutions to the PoH problem, and no one has settled on a standard. What you’ll notice is that there’s basically a marketplace for proof-of-human services that many web3 hustlers are hoping to dominate with their own solution.

Here’s a very brief list of some approaches:

  • Scan your eyeball data into a creepy orb (i.e., WorldCoin).
  • Multiple humans meet in person and give each other NFTs that essentially say, “I did an IRL thing with the person who controls this wallet.”
  • Users upload videos of themselves answering questions or doing some required task.
  • Users take cognitive tests that are still too hard for AIs.
  • Users either vouch for or challenge each other’s humanity.
  • The platform analyzes your social graph on some network and uses that as proof.
  • The platform looks at your wallet for NFT credentials that it recognizes as normally only given out to real humans for doing a thing in the real world — e.g., an on-chain certificate granted by an institution or program, or an earned community participation token or status badge from an established web3 community.

None of these are scam-proof by themselves, so most PoH offerings will combine multiple approaches to give you some kind of score. But just to be clear on what this list is: These aren’t random ideas or shower thoughts that I or someone else thought might be kinda cool if only someone were to build it — no, there are (or, in some cases, have been) actual shipping products built around these ideas and more, some of them with thousands of users. This stuff literally exists courtesy of the now-busted (but steadily reflating) crypto bubble, and actual communities are testing it.
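
The scoring step that combines those signals might look like the hedged sketch below. The signal names, weights, and threshold are all invented for illustration; real PoH providers choose their own signals and calibrations.

```python
# Hypothetical weights for the kinds of signals listed above.
SIGNAL_WEIGHTS = {
    "biometric_scan": 0.40,   # e.g., an orb-style iris check
    "irl_attestation": 0.25,  # NFTs from in-person meetups
    "video_challenge": 0.15,
    "cognitive_test": 0.10,
    "social_graph": 0.10,
}

def humanity_score(passed_signals: set[str]) -> float:
    """Combine whichever checks an account has passed into one score."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if name in passed_signals)

def is_probably_human(passed_signals: set[str], threshold: float = 0.5) -> bool:
    return humanity_score(passed_signals) >= threshold

print(is_probably_human({"biometric_scan", "irl_attestation"}))  # True  (0.65)
print(is_probably_human({"cognitive_test", "social_graph"}))     # False (0.20)
```

Because no single signal is trusted on its own, an attacker has to defeat several independent checks at once to cross the threshold, which is the whole point of combining them.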

Again, the problem is the sheer variety of such PoH efforts and the lack of a clear standard or authority. If there were a kind of “LinkedIn” but for PoH (maybe LinkedIn itself could do this), where if you worked at a job with colleagues, you got an on-chain badge that says, “Jon Stokes definitely worked here doing this thing,” that would probably dominate. But there is no such Schelling point — yet.

We’ll probably go with option 3

I can already hear many of you asking, “Couldn’t we do all the ‘web3’ stuff you’re describing with a government-issued digital identity?” (I.e., option number 3 on the list in the first section of this piece.)

The answer is, “Yes, obviously.” And there are a number of country-level efforts to do exactly this, some of which involve the blockchain and some of which do not.

As I said in the intro, the people who want the government to handle this for them will probably get their way, eventually. But it should be clear that it doesn’t have to be this way.

We have a multitude of options for proof-of-human that don’t involve paying centralized service providers, whether private-sector platforms like X via subscription fees or governments via taxes. We should use them if we value our privacy and freedom.

And if we do decide on pay-to-play, there are privacy-preserving options like Bitcoin (either L1 or lightning network) that could be used by social media to filter for bots without wrecking pseudonymity.

Musk announces temporary limits on number of tweets users can read



As thousands of Twitter users reported issues accessing the platform Saturday morning, Elon Musk announced temporary limitations on the number of tweets users would be able to read.

"To address extreme levels of data scraping and system manipulation, we've applied the following temporary limits," Musk tweeted Saturday.

The limits, which rose steadily throughout the day, were based on users' subscription status and time on the platform.

The limit announcement began with verified accounts being capped at reading 6,000 posts daily. Unverified accounts were limited to 1/10th that amount at 600 per day. New verified accounts were first capped at 300 per day.

In a tweet later in the day, Musk announced that the rate limits would be increasing to 8,000 for verified, 800 for unverified, and 400 for unverified accounts that are new.

Musk announced yet another reading-limit bump, to 10,000, 1,000, and 500, around 6 p.m. ET Saturday.

Using the platform's own trending panel as a gauge, users were less than enthused about the new limits. At various points throughout the day, phrases and hashtags including "d*** Twitter," "wtf Twitter," "Twitter down," "#GoodbyeTwitter," and "rate limit exceeded" were trending.

By around 6 p.m. ET, #RIPTwitter was trending, boasting more than 27,000 tweets.

The website DownDetector, which tracks issues and outages in real time, showed a marked spike in user-reported problems early Saturday morning.


Some users attempting to read tweets and interact on the platform were repeatedly met with an error message.

"Sorry, you are rate limited. Please wait a few moments then try again."

The service disruption and the announcement of new limits on reading tweets came just one day after an issue involving the ability to view tweets without being logged in to the site, CNN reported. It is not clear whether Friday's issues were a glitch or a policy change.

On Friday, Musk described the inability to browse Twitter's web version without being logged in as a "temporary emergency measure."

"We were getting data pillaged so much that it was degrading the system for normal users!" Musk also said.

Are Twitter employees covering their tracks right now?



Hero worship is swirling around Elon Musk just as many conservatives notice an increase in their Twitter followers over the past few days.

But what is the reason for the surge? Crowder explained the importance of context on Thursday's episode of "Louder with Crowder": Elon is not doing anything on Twitter yet, because he is not at the helm just yet. What conservatives are witnessing is likely Twitter employees covering their tracks.

In related news, Crowder tweeted about the Biden administration's decision to create a new misinformation governance board.

"The government is creating a misinformation governance board. Who else did something like that? Oh, I remember, the Nazis. And some data shows some interesting things going on post-@elonmusk's Twitter takeover," Crowder wrote.

"Discomforting," Musk replied.
