Biden’s COVID censorship machine takes a hit: Missouri wins landmark ban on federal threats to Big Tech



A landmark settlement delivered a blow to the censorship industrial complex that silenced Americans during the COVID era.

Sen. Eric Schmitt (R-Mo.) announced Tuesday that Missouri had reached a settlement agreement with the U.S. government in its Missouri v. Biden lawsuit, which accused the Biden administration of violating Americans' First Amendment rights by directing social media companies to censor speech challenging the government's COVID messaging.

'For every working Missouri family tired of being silenced by their own government: this victory is yours.'

Schmitt filed the lawsuit against the Biden administration while serving as Missouri attorney general, before securing his Senate seat.

The agreement included a 10-year Consent Decree that imposes a narrow permanent injunction on the surgeon general, the Centers for Disease Control and Prevention, and the Cybersecurity and Infrastructure Security Agency. The injunction prevents them from threatening social media companies with any form of punishment if those companies fail to remove or suppress content that contains protected speech.

However, this ban applies only to posts made on Facebook, Instagram, X, LinkedIn, and YouTube by the specific plaintiffs in the case, including Missouri and Louisiana government officials and agencies acting in their official capacity. It does not extend to other social media networks or content posted by the general public.

"The Parties also agree that government, politicians, media, academics, or anyone else applying labels such as 'misinformation,' 'disinformation,' or 'malinformation' to speech does not render it constitutionally unprotected," the agreement reads.

The court must first approve this settlement agreement.

RELATED: BlazeTV's 'The Coverup' exposes how the censorship industrial complex silenced Americans during COVID

Eric Schmitt. Photo by Anna Moneymaker/Getty Images

"We just won Missouri v. Biden," Schmitt wrote in a post on X. "As Missouri's Attorney General, I sued the Biden regime for brazenly colluding with Big Tech to silence Missouri families — censoring the truth about COVID, the Hunter Biden laptop, the open border, and the 2020 election. They tried to turn Facebook, X, YouTube, and the rest into their private speech police, labeling dissent 'misinformation' while they pushed their narrative on the American people."

Schmitt called the Consent Decree the "first real, operational restraint on the federal censorship machine."

He explained that it "directly binds the Surgeon General, the CDC, and CISA: no more threats of legal, regulatory, or economic punishment. No more coercion. No more unilateral direction or veto of platform decisions to remove, suppress, deplatform, or algorithmically bury protected speech."

"For every working Missouri family tired of being silenced by their own government: this victory is yours. The heartland fought back, and the heartland delivered," Schmitt concluded.

RELATED: 'Karma is a b***h': Trump taps epidemiologist targeted by Biden admin and censored online to run NIH

Photo by Matt Cardy/Getty Images

Benjamin Weingarten, a senior contributor at the Federalist, addressed the victory's narrow application.

"This decree is limited to the plaintiffs, but as precedent, and practically, its impact may prove orders of magnitude more powerful in protecting disfavored speech," Weingarten wrote, calling it "a momentous blow for the First Amendment."

National Institutes of Health Director Jay Bhattacharya, who had to withdraw as a plaintiff in the case after being appointed by the Trump administration, called the settlement "a huge win for all Americans."

"Huzzah! The consent decree in Missouri v. Biden is a historic victory for free speech in the US. Though I had to switch to the government side in the case after I became NIH director, I've never been more pleased by 'losing' in my life," he wrote.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

California’s next dumb tech idea: Show your papers to scroll



California has a habit of importing some of the worst tech-regulation ideas from overseas. After lawmakers enacted a censorial statute cribbed from the U.K. in 2022 — and watched it run headlong into an injunction — the Golden State now appears eager to borrow from Australia, which in December barred children from major social media platforms.

Earlier this month, California lawmakers introduced a bill to impose “a minimum age requirement to open or maintain a social media account.” Governor Gavin Newsom (D), who usually avoids weighing in on pending bills, publicly endorsed the idea.

Will America keep light-touch rules that protect consumers without strangling innovation — or import Europe’s heavy-handed, fear-driven approach?

However well intentioned, the Australian model collapses on prudential grounds. In the United States, it also invites a swift constitutional challenge — and likely a swift defeat in court.

Most proposals that force platforms to distinguish between adults and minors require age verification. That means users must hand over sensitive personal information — usually government ID documents or biometric data — as the price of entry to the platforms where everyday digital life happens. Once companies collect, process, and store that data, it becomes a tempting target. Hackers do not need ideology, only opportunity.

The roster of victims reads like Don Giovanni’s catalogue. The list includes corporations such as Target, Equifax, Marriott, Capital One, MGM Resorts, and T-Mobile. Platforms from Facebook to X.com to the “Tea” app were also hit. So were third-party verification services. Even in France, where regulators tried to build a privacy-protective system, a third-party age verifier exposed sensitive user data. In the digital age, breaches and leaks are simply a fact of life.

Legislation promoted as “child protection” thus runs into a basic contradiction: it can expose children to new forms of harm. As the R Street Institute and Experian have reported, 25% of minors will become victims of identity fraud or theft before they turn 18. Age-verification mandates would widen the attack surface and increase the odds that minors’ information gets stolen, misused, or sold — and that families spend years cleaning up the wreckage.

Some advocates now treat constitutional objections to “child-safety” bills as impolite. Courts don’t share that squeamishness. In recent years, judges have enjoined multiple constitutionally defective state laws, leaving behind little more than wasted taxpayer dollars and public frustration, while state attorneys general mount doomed defenses.

Newsom’s favored approach also clashes with a Supreme Court precedent California already lost: Brown v. Entertainment Merchants Association. In that 2011 case, the court struck down a California law that restricted minors’ access to violent video games. Justice Antonin Scalia’s majority opinion applied strict scrutiny — a demanding standard — and rejected the state’s argument that the law simply “helped” parents.

Scalia’s point applies with even greater force here. A sweeping ban on minors’ access to social media would function less as parental support and more as state substitution. The state would not merely empower parents; it would decide what parents should want, then impose that judgment across the board.

RELATED: Kids have already found a way around Australia's new social media ban: Making faces

David GRAY/AFP/Getty Images

In American law, parents generally hold the duty — and the right — to decide what media their children consume. That principle does not stop at the edge of the internet.

The broader fight over technology policy often turns on a single question: Will America stick with light-touch, sensible regulation that protects consumers without strangling innovation — or will it import the heavy-handed, fear-driven regulatory posture popular abroad, especially in Europe?

The American technology sector grew and thrived in the internet era. Many foreign regimes, more focused on expansive “safety” mandates than innovation, privacy, or consumer benefit, have not.

Lawmakers should borrow good ideas wherever they find them. But California keeps shopping in the wrong aisle. If Sacramento wants to protect kids, it should start with tools that don’t require building a mass ID-check system for the entire public — and that don’t hand criminals a richer trove of data to steal.

It’s wise to learn from other countries. It’s foolish to copy their worst mistakes.

'Large human smuggling operation' uncovered in Texas? ICE makes alarming claim about 'alien from India.'



While immigration enforcement has faced some hurdles, including a partial government shutdown, law enforcement has continued to take down criminals. In a major score for Immigration and Customs Enforcement's Houston office, authorities announced the arrest of two people who allegedly ran a large illegal operation.

On Wednesday morning, the official U.S. Citizenship and Immigration Services X, Facebook, and Instagram accounts announced the arrest of an "alien from India" and his "spouse" in Texas, where they were allegedly running a "large human smuggling operation."

'He and his spouse were apprehended ... on charges of human smuggling, document fraud, and overstaying their visa.'

"He and his spouse were apprehended at our Houston office by @ICEgov on charges of human smuggling, document fraud, and overstaying their visa," USCIS wrote.

"Human traffickers will be caught and held accountable," the account added.

RELATED: No more 'safe harbor for illegals': Colony Ridge settles with DOJ, Texas

Photo by Stephen Maturen/Getty Images

A USCIS spokesperson referred Blaze News to ICE for comment since ICE made the arrests.

Blaze News reached out to the DHS, ICE, and its Houston field office for comment but did not receive a response.

This is a developing story. Check back for updates.


'I am going to kill Donald Trump': Smug Democrat candidate threatens death penalty in latest campaign trick



While the Trump administration continues trying to put out real and proverbial fires started by Democrats, more are igniting across the country.

Now a Democratic candidate appears to be promising to kill the president as part of his campaign platform.

'That kind of vile comment makes it clear that Elliot Forhan is not qualified to be attorney general.'

On Tuesday, a video went viral of Ohio attorney general candidate Elliot Forhan (D) promising to "kill Donald Trump" if elected.

"I want to tell you what I mean when I say that I am going to kill Donald Trump," Forhan, a former Ohio state representative, said in a video posted to Facebook.

RELATED: 'Convicted and f**king dangles': NeverTrumper Rick Wilson calls for execution of top White House adviser

Current Ohio Attorney General Dave Yost (R); Bill Clark/CQ-Roll Call, Inc via Getty Images

"I mean I'm going to obtain a conviction rendered by a jury of his peers at a standard of proof beyond a reasonable doubt, based on evidence, presented at a trial, conducted in accordance with the requirements of due process, resulting in a sentence, duly executed, of capital punishment," Forhan said in the video.

In the clip, he did not indicate which crimes worthy of the death penalty he believed President Donald Trump had committed.

The Republican attorney general candidate for Ohio, Keith Faber, promptly posted a response to Forhan's unhinged rant.

"That kind of vile comment makes it clear that Elliot Forhan is not qualified to be attorney general," Faber said. "Look, it is important that [gubernatorial candidate] Amy Acton and the other Democrats on the ticket call him out for such conduct."

This isn't the first time Forhan has faced public scrutiny for his rhetoric. Just days after Charlie Kirk was assassinated, Forhan made a Facebook post that said, "Violence is wrong. F**k Charlie Kirk."

Faber didn't miss his chance to remind people of that vile comment from Forhan: "Add to that his recent celebration of the assassination of Charlie Kirk, and you see just what kind of individuals the Democrats are running for attorney general."

Forhan has also faced backlash and professional consequences for what some have alleged to be "erratic and abusive" behavior involving a female constituent and others, according to a 2023 article by Fox News.

Forhan was never charged with a crime, though he was stripped of his legislative privileges and committee assignments in the last General Assembly amid allegations and an investigation into his conduct, according to a report last February from the Statehouse News Bureau.

The primary election in Ohio will be held on May 5.

Ohio Attorney General Dave Yost (R) did not respond to a request for comment from Blaze News.


AI in education: Innovation or a predator’s playground?



For years, parents have been warned to monitor their children’s online activity, limit social media, and guard against predatory digital spaces. That guidance is now colliding with a very different message from policymakers and technology leaders: Artificial intelligence must be introduced earlier and more broadly in schools.

When risky platforms enter through schools, they inherit an unearned legitimacy, conditioning parents to trust tools they would never allow at home.

On its face, this goal sounds reasonable. But what began as a policy push has quickly turned into something far more concerning — a rush by major tech companies to brand themselves as “AI Education Partners,” gaining access to public education under the banner of innovation, often without parents being fully informed or given the ability to opt out. When risky platforms enter through schools, they inherit an unearned legitimacy, conditioning parents to trust tools they would never allow at home.

AI in education is being sold as inevitable and benevolent. Behind the buzzwords lies a harder truth: AI is becoming a back door for Big Tech to access children and sidestep parental authority.

Platforms already under fire for child safety

At the center of this debate are three companies — Meta, Snap, and Roblox — all now positioning themselves as AI education partners while facing active litigation and investigations tied to child exploitation, predatory behavior, and failures to protect minors.

Meta is facing lawsuits and regulatory actions related to child exploitation, unsafe platform design, and illegal data practices. Internal company documents revealed that Meta’s AI chatbots were permitted to engage minors in flirtatious, intimate, and even health-related conversations — policies the company only revised after media exposure.

European consumer watchdogs have also accused Meta of sweeping data collection practices that go far beyond what users reasonably expect, using behavioral data to profile emotional state, sexual identity, and vulnerability to addiction. Regulators argue that meaningful consent is impossible at such a scale. Meta has also claimed in U.S. courts that publicly available content can be used to train AI under “fair use,” raising serious questions about how student classroom work could be treated once ingested by AI systems.

Snapchat is facing lawsuits from multiple states, including Kansas, New Mexico, Utah, and others, alleging that its platform exposes minors to drug and weapons dealing, sexual exploitation, and severe mental health harm. In January 2025, federal regulators escalated concerns by referring a complaint involving Snapchat’s AI chatbot to the Department of Justice.

Despite this record, Snap signed on as an AI education partner, promising “in-app educational programming directed toward teens to raise awareness on safe and responsible use of AI technologies.”

Roblox, long flagged by parents for safety concerns, is being sued by multiple states, including Iowa, Louisiana, Texas, Tennessee, and Kentucky, over allegations that it enabled predators to groom and exploit children. Yet Roblox now seeks classroom access as an “AI learning” platform.

If these platforms are too dangerous for children at home, they are too dangerous to normalize at school. Allowing companies with a history of child-safety failures to integrate themselves into classrooms is negligent and dangerous.

The contradiction no one wants to address

The danger becomes clearer when you step outside the classroom.

Across the country, states including Florida, Tennessee, Louisiana, and Connecticut are restricting minors’ access to social media through age verification, parental consent, and limits on addictive features. At the federal level, the bipartisan Kids Off Social Media Act seeks to bar social media access for children under 13 and restrict algorithmic targeting of teens.

For more than a century, the Supreme Court has recognized that parents — not the state and not corporations — hold the fundamental right to direct their children’s education.

When Big Tech gains access to classrooms without transparency or consent, that authority is eroded. Parents are told to restrict social media at home while schools integrate the same platforms through AI. The result is families being sidelined while Big Tech reduces their children to data sources.

RELATED: Why every conservative parent should be watching California right now

Photo by AaronP/Bauer-Griffin/GC Images/Getty Images

This dangerous escalation must meet a clear boundary. Some platforms endanger children, others monetize them, and some expose their data. None of them belong in classrooms without strict, enforceable guardrails.

Parents do not need more promises. They need enforceable limits, transparency, and the unquestioned right to say no. The Constitution has long recognized that the right to direct a child’s education belongs to parents, not Silicon Valley. That authority does not stop at the classroom door.

If artificial intelligence is going to enter our classrooms, it must do so on the terms of families, not tech companies.

'Advocate for the Democratic Party': Democrat judge loses free-speech appeal over partisan social media posts



Last week, the Pennsylvania Supreme Court issued a major opinion defining the free-speech parameters for sitting judges in the commonwealth, stemming from a 2024 case.

The case concerned former Judge Mark B. Cohen, a Democrat who was suspended from the Philadelphia County Court of Common Pleas by the Court of Judicial Discipline in October 2024 over his outspoken political posts on Facebook.

'When, as here, a sitting judge adopts the persona of a political party spokesperson and abuses the prestige of his office to advance that party’s interests, he detracts from the reputation of the entire judiciary.'

These posts, the Philly Voice reported, involved, for example, Cohen's views about former Rep. Liz Cheney (R-Wyo.), the hammer attack on the husband of Rep. Nancy Pelosi (D-Calif.), and the election of Democrat Pennsylvania Gov. Josh Shapiro, among other national and state political issues.

RELATED: Activist Democrat judge sabotages National Guard surge in Memphis after 100 children rescued

Photo by MARTIN BUREAU/AFP via Getty Images

The opinion outlined some of the other issues that Cohen advocated for on Facebook, demonstrating his apparent partisanship.

"Judge Cohen advocated for legislation, such as the Build Back Better Bill that was then being promoted by the Democratic Party, cheered on Democratic politicians, impliedly endorsed a candidate for congressional office, touted his own legislative achievements as a Democrat, and criticized the policies of predominately Republican legislatures."

Cohen previously served as a Democrat Pennsylvania state representative from 1974 to 2016 prior to his election to the Court of Common Pleas in 2018, according to his biography on the Pennsylvania House of Representatives website.

Six of the seven justices on the Pennsylvania Supreme Court joined the decision. However, Justice Wecht filed a concurring opinion in which he refused to endorse "any suggestion that a jurist" who formerly served in a political branch of government "may not in some appropriate fashion refer to ... his or her record of actions taken or accomplishments achieved while serving."

The seventh, Justice McCaffery, did not participate in the decision or deliberation of the case.

While the court affirmed that judges are in fact uniquely qualified to share their professional opinions on some matters, the issue with Cohen's posts was consistently the "volume and tone" of the content he was sharing.

Justice Dougherty, who wrote the opinion of the court, said, "Thus, Judge Cohen did not put just his own reputation at risk. When, as here, a sitting judge adopts the persona of a political party spokesperson and abuses the prestige of his office to advance that party’s interests, he detracts from the reputation of the entire judiciary."

The opinion of the court upheld the CJD's concerns "not just that Judge Cohen publicly posted his personal, political views, but that he posted so regularly and one-sidedly that he appeared to be 'an advocate for the Democratic Party.'"

The court concluded that "the Commonwealth's interest in protecting the efficiency of the administration of justice outweighed Judge Cohen's interest in posting partisan political content on Facebook where the volume and tone of his posts cast him as little more than a spokesperson for the Democratic Party."

Cohen's lawyer, Samuel Stretton, suggested that Cohen is considering an appeal before the U.S. Supreme Court.

"It's very important for a judge to have the right to be involved in issues that don't come before them or their colleagues," Stretton said.

According to prior court documents, Cohen, 77, was suspended without pay in October 2024 through December 31, 2024, at which point he was mandated to retire due to age. Based on available court documents, it is not clear whether his benefits would continue, though his legal counsel appealed the decision to suspend his medical benefits.


Meta accused of deleting scam ads to dodge government regulation



Meta says it deleted ads from its platforms to get rid of scams, not to hide them.

A review of internal documents, however, spurred allegations that Meta was attempting to make certain ads "not findable" to government regulators.

'To suggest otherwise is disingenuous.'

According to a report by Reuters — which said it reviewed the documents — Meta began deleting potentially fraudulent ads from its search function after Japanese regulators grew upset over obvious scams on Facebook and Instagram pushing fake celebrity product endorsements or investment schemes.

Reuters said that, according to the documents, Meta feared Japan would force the company to verify the identities of its advertisers.

In order to test Meta's work on "tackling scams," Japanese regulators allegedly used the search function on Meta's "Ad Library" to seek out fraudulent ads; the library acts as a "comprehensive, searchable database for ads transparency," the company states on its website.

This "simple test," as described in the documents, was allegedly how Meta sought to get back into the regulators' good graces. The documents purportedly showed that Meta identified the top keywords and celebrity names Japanese regulators were searching to find fraud, then deleted the ads that appeared fraudulent.

RELATED: OOF: Mark Zuckerberg's losing metaverse bet cost Meta $77B

Photo by Arda Kucukkaya/Anadolu via Getty Images

The deletions made certain content "not findable" for "regulators, investigators, and journalists," Reuters claimed.

A few months later, a Meta memo allegedly stated that "less than 100" of the unwanted ads had been discovered in the last week of a testing period, "hitting 0 for the last 4 days of the sprint."

This was apparently applauded by the Japanese government, and Japan did not end up forcing advertiser verification.

Meta then reportedly added the deletion tactics to its "general global playbook" to be deployed against regulatory scrutiny in other markets, including the U.S., Europe, and Australia. The alleged playbook was a strategy to stall regulators and stave off advertiser verification requirements, the report claimed.

A Meta spokesperson has since called the allegations disingenuous, arguing that Meta's deletion of fraudulent ads from its platforms is a good thing, not a bad one.

Meta spokesman Andy Stone told the outlet that there is nothing misleading about removing the scam ads from the library. "To suggest otherwise is disingenuous," he insisted.

RELATED: 2025 is so over and so is virtual reality

Photographer: Kiyoshi Ota/Bloomberg via Getty Images

"Meta teams regularly check the Ad Library to identify scam ads because when fewer scam ads show up there that means there are fewer scam ads on the platform," Stone added.

On top of claiming that verifying advertisers is "not a silver bullet," Stone said that chasing down scam ads is a job that will "never end."

Verification "works best in concert with other, higher-impact tools," the spokesman noted. "We set a global baseline and aggressive targets to drive down scam activity in countries where it was greatest, all of which has led to an overall reduction in scams on platform."

Meta also claimed that it has seen a 50% decline in user reports of scams over the past year.

Return reached out to Meta for additional comments. This article will be updated with any applicable responses.


Kids have already found a way around Australia's new social media ban: Making faces



The liberal-dominated Australian parliament passed an amendment to its online safety legislation last year, imposing age restrictions for certain social media platforms.

As of Dec. 10, minors in the former penal colony are prohibited from using various platforms, including Facebook, Reddit, Snapchat, TikTok, X, and YouTube — platforms that face potential fines of up to $32 million should they fail to prevent kids from creating new accounts or from maintaining old accounts.

Australian kids were quick, however, to find a workaround: distorting their faces to appear older.

'They know how important it is to give kids more time to just be kids.'

Numerous minors revealed to the Telegraph that within minutes of the ban going into effect, they were able to get past their country's new age-verification technology by frowning at the camera.

Noah Jones, a 15-year-old boy from Sydney, indicated that he used his brother's ID card to rejoin Instagram after the app flagged him as looking too young.

Jones, whose mother supported his rebellion and characterized the law as "poor legislation," indicated that when Snapchat similarly prompted him to verify his age, "I just looked at [the camera], frowned a little bit, and it said I was over 16."

RELATED: App allegedly endangers ICE agents — now its creator is suing the Trump administration

Australian Prime Minister Anthony Albanese. Photo by DAVID GRAY / AFP via Getty Images.

Jones suggested to the Telegraph that some teens may alternatively seek out social media platforms the Australian government can't regulate or touch.

"Where do you think everyone's going to?" said Jones. "Straight to worse social media platforms — they're less regulated, and they're more dangerous."

Zarla Macdonald, a 14-year-old in Queensland, reportedly contemplated joining one such less-regulated app, Coverstar. However, she has so far managed to stay on TikTok and Snapchat because the age-verification software mistakenly concluded she was 20.

"You have to show your face, turn it to the side, open your mouth, like just show movement in your face," said Macdonald. "But it doesn't really work."

Besides fake IDs and frowning, some teens are apparently using stock images, makeup, masks, and fake mustaches to fool the age-verification tech. Others are alternatively using VPNs and their parents' accounts to get on social media.

The social media ban went into effect months after a government-commissioned study determined on the basis of a nationally representative survey of 2,629 kids ages 10 to 15 that:

  • 71% had encountered content online associated with harm;
  • 52% had been cyberbullied;
  • 25% had experienced online "hate";
  • 24% had experienced online sexual harassment;
  • 23% had experienced non-consensual tracking, monitoring, or harassment;
  • 14% had experienced online grooming-type behavior; and
  • 8% had experienced image-based abuse.

Australian Prime Minister Anthony Albanese said in a statement on Wednesday, "Parents, teachers, and students are backing in our social media ban for under-16s. Because they know how important it is to give kids more time to just be kids — without algorithms, endless feeds and online harm. This is about giving children a safer childhood and parents more peace of mind."

The picture accompanying his statement featured a girl who, at that very moment, was expressing opposition to the ban.

The student in Albanese's poorly chosen photo is hardly the only opponent of the law.

Reddit filed a lawsuit on Friday in Australia's High Court seeking to overturn the ban. The U.S.-based company argued that the ban should be invalidated because it interferes with the freedom of political speech implied by Australia's constitution, Reuters reported.

Australian Health Minister Mark Butler suggested Reddit was not suing to protect young Aussies' right to political speech but rather to protect profits.

"It is action we saw time and time again by Big Tobacco against tobacco control, and we are seeing it now by some social media or Big Tech giant," said Butler.


Australia BANS key social media apps for kids under 16 — and platforms must enforce the rule



Australia will put the onus on social media platforms to limit access to children under 16 years old.

The Online Safety Amendment (Social Media Minimum Age) Bill 2024 amended Australia's existing online safety law and gave social media companies time to age-restrict their platforms and "take reasonable steps to prevent Australian under 16s from having account[s]."

'No Australian will be compelled to use government identification.'

Officially taking effect on December 10, the ban includes Facebook, Instagram, Snapchat, Threads, X, and YouTube's general platform; YouTube Kids and WhatsApp do not meet the criteria for the ban.

Australia's social media minimum-age framework sets out the criteria that bring a platform under the ban. A platform is covered if its sole purpose, or a "significant purpose," is to "enable online social interaction between two or more end-users."

A service is also covered — and thus unavailable to younger Australians — if it "allows end-users to link to, or interact with, some or all of the other end-users," "allows end-users to post material on the service," and "meets such other conditions (if any) as are set out in the legislative rules."

The legislation can also specify certain platforms, or classes, to not include in the ban.

Social media platforms will be responsible for enforcement, and neither children nor their parents will face punishment should they gain access. Companies face fines of up to $32 million (just under 50 million Australian dollars).

RELATED: How Texas slammed the gate on Big Tech’s censorship stampede

Photo by DAVID GRAY/AFP via Getty Images

The government further defined the requirements placed upon the platforms, adding that they must "take reasonable steps to prevent" those under 16 from having accounts.

The legislation also specified that "no Australian will be compelled to use government identification (including Digital ID) to prove their age online" and that platforms must offer reasonable alternatives to their users.

According to the BBC, other countries are close behind Australia in implementing similar bans. The French government has begun a parliamentary inquiry into banning children under 15 years old from social media while also implementing a "digital curfew" for those between 15 and 18.

The Spanish government has also drafted a law that would require parental consent for children under 16 to access social media.

RELATED: Conservative influencers promote Qatar as a desert paradise — but are they lying?

Photo by DAVID GRAY/AFP via Getty Images

Anika Wells, an official of Australia's ruling left-wing Labor Party who serves as communications minister (and minister for sport), said that the ban is not "perfect" and is going to "look a bit untidy on the way through."

"Big reforms always do," she added.

Australians under 16 will still be able to access content that is available on a website without being logged in or being a member, as there is virtually no way to prevent that without restricting access to the internet entirely.
