Trump signs bipartisan bill tackling 'horrific' revenge porn, AI deepfakes



President Donald Trump on Monday signed the Take It Down Act into law, a bill that toughens penalties for the distribution of revenge porn and AI-generated deepfakes.

The bill was introduced by Republican Sen. Ted Cruz of Texas and Democratic Sen. Amy Klobuchar of Minnesota and later championed by first lady Melania Trump. The bill, which passed both the House and the Senate with overwhelming bipartisan support, aims to hold both individuals and platforms accountable for distributing nonconsensual materials.



"The TAKE IT DOWN Act gives victims of revenge and deepfake pornography — many of whom are young girls — the ability to fight back," Cruz said in a statement. "Under our bipartisan bill, those who knowingly spread this vile material will face criminal charges, and Big Tech companies must remove exploitative content without delay."

"As we worked on the TAKE IT DOWN Act, more victims courageously came forward to share their stories to help end this horrific online abuse," Cruz added.

The bill makes it a crime to "knowingly publish" deepfakes or revenge porn and requires platforms to remove the material within 48 hours of notification. Although most states already have laws prohibiting the dissemination of such nonconsensual content, the Take It Down Act establishes these protections at the federal level.

"This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused," Melania Trump said at a press conference before the bill signing.


"As a father of three young girls, I’m deeply concerned about the rise of deepfakes and nonconsensual intimate images in our country. It is sickening, it is harmful, and it must be stopped — and this law is a major step forward in protecting victims and restoring online accountability," Republican Rep. August Pfluger of Texas told Blaze News.

"I was proud to co-lead this legislation in the House and commend Rep. Salazar, Senator Cruz, and first lady Melania Trump for their leadership in driving it across the finish line," Pfluger added. "I also thank President Trump for taking decisive action to cement this legislation into law."


Congress Should Extend The ‘Take It Down’ Act With These 3 Rules Protecting Kids From AI

Lawmakers should include, at a minimum, these three guiding principles in a national framework bill for AI.

‘Take It Down Act’ targets deepfake perverts exploiting teens online



Elliston Berry was 14 years old when a classmate used an AI editing app to turn her social media photo into a deepfake nude. He circulated the fake image on Snapchat. The next day, similar deepfake images of eight more girls spread among classmates.

The victims’ parents filed a Title IX complaint. Authorities charged the student who created the images with a class A misdemeanor. Still, the deepfake nudes stayed online. Berry’s mother appealed to Snapchat for more than eight months to remove the images. Only after U.S. Sen. Ted Cruz (R-Texas) personally contacted the company did Snapchat finally take the pictures down.


As AI becomes cheaper and more accessible, anyone can create exploitative digital content — and anyone can become a victim. In 2023, one in three deepfake tools allowed users to produce AI-generated pornography. With just one clear photo, anyone could create a 60-second pornographic video in under 25 minutes for free.

The explosion of deepfake pornography should surprise no one. Pornography accounted for 98% of all online deepfake videos in 2023. Women made up 99% of the victims.

Even though AI-generated images are fake, the consequences are real — humiliation, exploitation, and shattered reputations. Without strong laws, explicit deepfakes can haunt victims forever, circulating online, jeopardizing careers, and inflicting lifelong damage.

First lady Melania Trump has made tackling this crisis an early priority — and she’s right. In the digital age, technological advancement must come with stronger protections for kids and families online. AI’s power to innovate also carries a power to destroy. To curb its abuse, the first lady has championed the Take It Down Act, a bipartisan bill sponsored by Cruz and Sen. Amy Klobuchar (D-Minn.).

The bill would make it illegal to knowingly publish “nonconsensual intimate imagery” depicting real, identifiable people on social media or other online platforms. Crucially, it would also require websites to remove such images within 48 hours of receiving notice from a victim.

The Take It Down Act marks an essential first step in building federal protections for kids online. Pornography already peddles addiction in the guise of pleasure. AI-generated pornography, created without the subject’s knowledge or consent, takes the exploitation even further. Deepfake porn spreads like wildfire. One in eight teenagers ages 13 to 17 knows someone who has been victimized by fake nudes.

The bill also holds AI porn creators accountable. Victims would finally gain the legal means to demand removal of deepfake images from social media and pornography sites alike.

Forty-nine states and Washington, D.C., ban the nonconsensual distribution of real intimate images, often called “revenge porn.” As AI technology advanced, 20 states also passed laws targeting the distribution of deepfake pornographic images.

State laws help, but they cannot fully protect Americans in a borderless digital world. AI-generated pornography demands a federal solution. The Take It Down Act would guarantee justice for victims no matter where they live — and force websites to comply with the 48-hour removal rule.

We are grateful that the first lady has fought for this cause and that the Senate has acted. Now the House must follow. With President Trump’s signature, this critical protection for victims of digital exploitation can finally become law.

Ohio Could Be The Next State To Protect Kids Online, Unless The Porn Industry Gets Its Way

Ohio has introduced a bill to protect kids from online porn and 'deepfakes.' The porn industry has this effort in its crosshairs.

Federal Judge Stops Newsom’s Assault On Political Speech

The decision is a significant victory for the First Amendment, which has been under constant assault from leftists like the California governor.

Gavin Newsom tries to ban AI memes and becomes the STAR of one



Parodies are everywhere this election season, and many of them are quite funny — especially when it comes to Kamala Harris.

One video circulating on social media shows the vice president seemingly exposing herself as an incompetent candidate for president. Elon Musk, who’s a fan of a good joke, decided to retweet this video on his own X account, which apparently made Gavin Newsom mad.

“I just signed a bill to make this illegal in the state of California. You can no longer knowingly distribute an ad or other election communication that contains materially deceptive content — including deepfakes,” the California governor wrote in a post on X.

The post was a quote tweet of an earlier post Newsom had written complaining about the fake ad. In the original post he wrote, “Manipulating a voice in an ‘ad’ like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is.”

“If they think that a law is somehow going to stop what is coming via AI and editing and everything else, they’re just completely crazy,” Dave Rubin of “The Rubin Report” says to his guests, Dr. Drew Pinsky and Karol Markowicz.

“Satire is covered by the First Amendment. That’s been established,” Markowicz comments. “I’m glad he solved all the other problems in California, obviously, and it’s finally got to this,” she adds, laughing.

All jokes aside, Markowicz isn’t pleased that Newsom’s set his sights on this issue.

“It is scary that he’s attempting it. It’s attempting to shut down speech with which he disagrees. And that speech you know, again, covered by the First Amendment, is satire,” she says.

However, Newsom’s fight against AI ads isn’t really working in his favor, as the Babylon Bee made an AI ad of the California governor explaining how incompetent he is.

“My policies were so effective that almost one million people are now fleeing the state every year,” Newsom’s AI voice says. The voice also relayed that “bigots shouldn’t be allowed to have kids” and that Kamala will “do to the country everything I did in California.”

The video finishes with the sign off, “Thank you, and science bless America.”

“Putting aside the fact that it’s actually just like the Kamala video, there’s more truth in that than a real Gavin Newsom ad,” Rubin comments.



Will The Taylor Swift Deepfake Scandal Force Congress To Get Serious About AI Pornography?

Washington tends to ignore problems that harm the little guy, but something might get done now that someone with power and influence has been affected.

Deepfake Pornography Reveals Yet Another Risk Posed By Artificial Intelligence

If artificial intelligence is to be integrated into our society, we have to prevent things like deepfake pornography.

New Amazon voice-cloning technology raises deepfake concerns



For years, experts have debated the issues raised by deepfake technology. Congress and the military have discussed how to prepare the country for the fraud that is likely as deepfakes become more realistic. Now, a new advance from Amazon has taken that technology to the next level.

At Amazon’s re:MARS conference in June, Rohit Prasad — head scientist and vice president of Alexa AI — demonstrated how Amazon scientists could recreate any voice based on just a one-minute audio sample.

Amazon’s original Alexa voice debuted in November 2014. In its initial years, the voice was heavily critiqued. In 2017, VentureBeat wrote, “Alexa is pretty smart, but no matter what the A.I.-powered assistant talks about, there’s no getting around its relatively flat and monotone voice.”

Now, however, text-to-speech (TTS) technology has made major advancements towards more realistic — some argue, too realistic — speech.

As Fast Company reports, in an effort to create more expressive and natural-sounding voices, Amazon, Google, Microsoft, Baidu, and other major players in text-to-speech have all in recent years adopted some form of “neural TTS.” Neural TTS uses deep-learning neural networks trained on human speech and can convert any text input into human-sounding speech. Neural systems are capable of learning “not just pronunciation but also patterns of rhythm, stress, and intonation.”

Amazon hasn’t announced when this new voice-cloning capability will be available to developers and the public.

Governments everywhere are struggling to figure out how to adapt to these latest advancements. In America, most deepfakes are considered protected free speech, at least for now. Still, some states have attempted to take action against nefarious uses of the technology. In New York, commercial use of a performer's synthetic likeness without consent is banned for 40 years after the performer's death, according to CBS News. California and Texas prohibit deceptive political deepfakes before elections.

While concerns about realistic voice-cloning technology are rampant, developers of the technology, like Amazon, are optimistic. In an email to Fast Company, an Amazon spokesperson wrote: “Personalizing Alexa’s voice is a highly desired feature by our customers, who could use this technology to create many delightful experiences. We are working on improving the fundamental science that we demonstrated at re:MARS and are exploring use cases that will delight our customers, with necessary guardrails to avoid any potential misuse.”

Cheer mom sent coaches 'deepfake' nude videos of daughter's rivals to force them off squad: DA



A Pennsylvania mother has been criminally charged over allegations that she created "deepfake" videos of girls on her teenage daughter's cheerleading squad and sent them to the team's coaches in an effort to force the girls from the squad.

The images falsely depicted the alleged victims in the nude, drinking alcohol, and smoking.

What are the details?

Prosecutors say Raffaela Spone, 50, of Chalfont, created fake videos and photos falsely depicting at least three of her daughter's teammates on the Victory Vipers cheerleading squad, and sent them anonymously to coaches and the girls themselves, The Philadelphia Inquirer reported.

Bucks County District Attorney Matt Weintraub told the newspaper that the alleged victims informed police that Spone had sent them "manipulated images" in anonymous messages and "urged them to kill themselves."

One of the girls' parents called police in July of last year, after their daughter received the harassing messages. According to the outlet, "her parents were concerned, they told police, because the videos could have caused their daughter to be removed from the team."

The Inquirer reported that not only the girl but "her coaches at Victory Vipers were also sent photos that appeared to depict her naked, drinking, and smoking a vape." Two more squad members came forward thereafter reporting similar stories.

Investigators determined that the photos and videos were "deepfakes" — digitally altered media created by feeding real photos of the girls (taken from social media and elsewhere) into programs that produce realistic-looking images.

PennLive reported that Spone is facing three counts of cyber harassment of a child and three counts of harassment. There is no evidence at this point that her daughter was aware of what her mother is accused of doing.

Anything else?

Deepfakes have made headlines of late due to strikingly realistic fake videos depicting actor Tom Cruise that have circulated on the internet.

CinemaBlend reported that the fakes of Cruise "are so seamless even security analysts are worried," but the creator of the videos, Chris Ume, says the public shouldn't be concerned because "each clip takes weeks of work."
