Trump and Elon want TRUTH online. AI feeds on bias. So what's the fix?



The Trump administration has unveiled a broad action plan for AI (America’s AI Action Plan). The general vibe is one of treating AI like a business, aiming to sell the AI stack worldwide and generate lock-in for American technology. “Winning,” in this context, is primarily economic. The plan also includes the sorely needed idea of modernizing the electrical grid, a growing concern given rising electricity demand from data centers. While any extra business is welcome in a heavily indebted nation, the section on the political objectivity of AI is too brief, and it misunderstands the root cause of political bias in AI and its role in the culture war.

The plan uses the term "objective" and implies that a lack of objectivity is entirely the fault of the developer, for example:

Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.

The fear that AIs might tip the scales of the culture war away from traditional values and toward leftism is real. Try asking ChatGPT, Claude, or even DeepSeek about climate change, where COVID came from, or USAID.


This desire for objectivity in AI may come from a good place, but it fundamentally misconstrues how AIs are built. AI in general, and LLMs in particular, are a combination of data and algorithms, which further break down into network architecture and training methods. Network architecture is frequently based on stacking transformer (attention) layers, though it can be modified with concepts like “mixture of experts.” Training methods are varied and include pre-training along with its supporting steps: data cleaning, weight initialization, tokenization, and learning-rate schedules. They also include post-training methods, in which the base model is modified to conform to a metric other than the accuracy of predicting the next token.
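To make "stacking transformer layers" and "predicting the next token" concrete, here is a minimal sketch in PyTorch. The layer counts, dimensions, and class names are illustrative assumptions for this article, not the recipe of any production model.

```python
# Minimal sketch of a GPT-style language model: token embeddings, a stack of
# transformer (attention) layers, and a head that predicts the next token.
# All sizes and names here are illustrative, not those of any production model.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    def __init__(self, vocab_size=50_000, d_model=512, n_heads=8, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        block = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        # "Stacking transformer layers": the same block repeated n_layers times.
        self.blocks = nn.TransformerEncoder(block, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)
        # A causal mask keeps each position from attending to future tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(token_ids.size(1))
        x = self.blocks(x, mask=mask)
        return self.lm_head(x)  # logits over the vocabulary at every position

# Pre-training minimizes next-token prediction error (cross-entropy on these
# logits); post-training methods such as RLHF later optimize other objectives.
model = TinyLM()
logits = model(torch.randint(0, 50_000, (1, 16)))  # one sequence of 16 tokens
```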

Many have complained that post-training methods like Reinforcement Learning from Human Feedback introduce political bias into models at the cost of accuracy, causing them to avoid controversial topics or spout opinions approved by the companies — opinions usually farther to the left than those of the average user. “Jailbreaking” models to avoid such restrictions was once a common pastime, but it is becoming harder, as corporate safety measures, sometimes as complex as entirely new models, scan both the input to and output from the underlying base model.
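The "safety measures" that scan traffic around the base model can be pictured as a wrapper: one check on the way in, one on the way out. The sketch below is a simplified, hypothetical illustration; the function names and the keyword-based classifier are stand-ins, not any company's actual moderation stack, which may involve entirely separate models.

```python
# Hedged sketch of a "safety" wrapper scanning traffic to and from a base model.
# `safety_classifier` and `base_model_generate` are hypothetical stand-ins, not
# any vendor's actual moderation stack, which may use entirely separate models.
REFUSAL = "I can't help with that."

def safety_classifier(text: str) -> bool:
    """Toy classifier: flags text containing a banned term."""
    banned = {"example-banned-topic"}
    return any(term in text.lower() for term in banned)

def base_model_generate(prompt: str) -> str:
    """Stand-in for the underlying base model."""
    return f"(base model answer to: {prompt})"

def guarded_generate(prompt: str) -> str:
    # 1. Scan the user's input before it reaches the base model.
    if safety_classifier(prompt):
        return REFUSAL
    draft = base_model_generate(prompt)
    # 2. Scan the draft output; jailbreaks that slip past step 1 get caught here.
    if safety_classifier(draft):
        return REFUSAL
    return draft

print(guarded_generate("Tell me about example-banned-topic"))  # refusal
print(guarded_generate("Summarize the AI Action Plan"))        # passes through
```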

As a result of this battle between RLHF and jailbreakers, an idea has emerged that these post-training methods and safety features are how liberal bias gets into the models. The belief is that if we simply removed these, the models would display their true objective nature. Unfortunately for both the Trump administration and the future of America, this is only partially correct. Developers can indeed make a model less objective and more biased in a leftward direction under the guise of safety. However, it is very hard to make models that are more objective.

The problem is data

According to the report "Google AI Mode vs. Traditional Search & Other LLMs," the domains most frequently cited by LLMs are Reddit (40%), YouTube (26%), Wikipedia (23%), Google (23%), Yelp (21%), Facebook (20%), and Amazon (19%).

This implies that much of the factual, outside-world data in AIs comes from Reddit. Spending trillions of dollars to create an “eternal Redditor” isn’t going to cure cancer. At best, it might create a “cure cancer cheerleader” who hypes up every advance and forgets about it two weeks later. One can only do so much in the algorithm layer to counteract the frame of mind of the average Redditor. In this sense, the political slant of LLMs is less due to the biases of developers and corporations (although they do exist) and more due to the biases of the training data, which is heavily skewed toward being generated during the "woke tyranny" era of the internet.
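If a developer wanted to counteract this skew at the data layer rather than the algorithm layer, the blunt lever is re-weighting sources when mixing the training corpus. Here is a rough sketch that treats the citation shares above as relative weights; the down-weighting factors are invented for illustration, not any lab's actual recipe.

```python
# Illustrative corpus mixing with per-source weights. The shares loosely echo
# the citation figures above; the down-weighting factors are invented.
import random

citation_share = {
    "reddit.com": 0.40,
    "youtube.com": 0.26,
    "wikipedia.org": 0.23,
    "yelp.com": 0.21,
}

downweight = {"reddit.com": 0.3, "yelp.com": 0.5}  # hypothetical corrections

def sampling_weights(shares, corrections):
    """Turn raw shares times corrections into normalized sampling probabilities."""
    raw = {domain: share * corrections.get(domain, 1.0)
           for domain, share in shares.items()}
    total = sum(raw.values())
    return {domain: weight / total for domain, weight in raw.items()}

weights = sampling_weights(citation_share, downweight)
domains, probs = zip(*weights.items())
print(weights)                                       # adjusted mixing ratios
print(random.choices(domains, weights=probs, k=10))  # sources feeding one batch
```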

In this way, the AI bias problem is not about removing bias to reveal a magic objective base layer. Rather, it is about creating a human-generated and curated set of true facts that can then be used by LLMs. Using legislation to remove the methods by which left-leaning developers push AIs into their political corner is a great idea, but it is far from sufficient. Getting humans to generate truthful data is extremely important.

The pipeline to create truthful data likely needs at least four steps (a toy sketch of the handoffs follows the list).

1. Raw data generation of detailed tables and statistics (usually done by agencies or large enterprises).

2. Mathematically informed analysis of this data (usually done by scientists).

3. Distillation of scientific studies for educated non-experts (in theory done by journalists, but in practice rarely done at all).

4. Social distribution via either permanent (wiki) or temporary (X) channels.
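As a rough illustration of how data might be handed from one stage to the next, here is a toy data model mirroring the four steps above. The field names, sources, and example record are hypothetical; the point is that each stage layers human-produced interpretation on top of traceable raw data.

```python
# Toy data model for the four steps above. Field names and the example record
# are hypothetical; the point is that each stage adds a layer of human-produced
# interpretation on top of traceable raw data.
from dataclasses import dataclass, field

@dataclass
class RawDataset:            # step 1: agency or enterprise publishes raw tables
    source: str
    url: str

@dataclass
class Analysis:              # step 2: scientists fit models to the raw data
    dataset: RawDataset
    method: str
    finding: str

@dataclass
class Distillation:          # step 3: journalists summarize for non-experts
    analyses: list = field(default_factory=list)
    summary: str = ""

@dataclass
class Distribution:          # step 4: permanent (wiki) or temporary (X) channel
    distillation: Distillation
    channel: str

raw = RawDataset(source="(hypothetical agency)", url="https://example.gov/data.csv")
study = Analysis(dataset=raw, method="regression", finding="(hypothetical finding)")
piece = Distillation(analyses=[study], summary="(plain-language summary)")
post = Distribution(distillation=piece, channel="wiki")
```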

This problem of truthful data plus commentary for AI training is a government, philanthropic, and business problem.


I can imagine an idealized scenario in which all these problems are solved by harmonious action on all three fronts. The government can help with the first step by forcing agencies to be more transparent with their data, publishing it in both human-readable and computer-friendly formats. That means more CSVs, plain text, and hyperlinks, and fewer citations, PDFs, and fancy graphics with hard-to-find data. FBI crime statistics, immigration statistics, breakdowns of government spending, the outputs of government-conducted research, minute-by-minute election data, and GDP statistics are fundamental pillars of truth and are almost always politically helpful to the broader right.
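To make the CSV-versus-PDF point concrete: data published as a plain CSV can be consumed by a human, a journalist's script, or an LLM training pipeline in a few lines, with no brittle PDF extraction. The URL and file name below are placeholders, not a real agency endpoint.

```python
# Consuming machine-readable government data is trivial when it is published as
# a plain CSV. The URL below is a placeholder, not a real agency endpoint.
import csv
import io
import urllib.request

PLACEHOLDER_URL = "https://example.gov/crime_stats_2024.csv"  # hypothetical

def load_table(url: str) -> list[dict]:
    """Download a CSV and return its rows as plain dictionaries."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

# rows = load_table(PLACEHOLDER_URL)
# Each row is then a dict that an analyst, a journalist's script, or an LLM data
# pipeline can use directly: no PDF scraping, no re-typing numbers from a chart.
```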

In an ideal world, the distillation of raw data into causal models would be done by a team of highly paid scientists via a nonprofit or a government contract. This work is too complex to be left to the crowd, and its benefits are too distributed to be easily captured by the market.

The journalistic portion of combining papers into an elite consensus could be done similarly to today: with high-quality, subscription-based magazines. While such businesses can be profitable, for this content to integrate with AI, the AI companies themselves need to properly license the data and share revenue.

The last step seems to be mostly working today, as it would be done by influencers paid via ad revenue shares or similar engagement-based metrics. Creating permanent, rather than disappearing, data (à la Wikipedia) is a time-intensive and thankless task that will likely need paid editors in the future to keep the quality bar high.

Freedom doesn't always boost truth

However, we do not live in an ideal world. The epistemic landscape has vastly improved since Elon Musk's purchase of Twitter. At the very least, truth-seeking accounts don’t have to deal with as much arbitrary censorship. Even other media have made token statements claiming they will censor less, even as some AI “safety” features are ramped up to a much higher setting than social media censorship ever was.

The challenge with X and other media is that tech companies generally favor technocratic solutions over direct payment for pro-social content. There seems to be a widespread belief in a marketplace of ideas: the notion that without censorship (or with only some person’s favorite censorship), truthful ideas will win over false ones. This likely contains an element of truth, but the peculiarities of each algorithm may favor only certain types of truthful content.

“X is the new media” is a common refrain. Yet both anonymous and public accounts on X are implicitly burdened with tasks as varied and complex as gathering election data, writing long think pieces, and consistently repeating slogans that reinforce a key message. All for a chance at a few Elon bucks. They are doing this while competing with stolen-valor thirst traps from overseas accounts. Obviously, most are not that motivated and stick to pithy, simple content rather than intellectually grounded think pieces. The broader “right” is still needlessly ceding intellectual and data-creation ground to the left, despite occasional victories in defunding anti-civilizational NGOs and taking control of key platforms.

The other issue experienced by data creators across the political spectrum is the reliance on unpaid volunteers. As the economic belt inevitably tightens and productive people have less spare time, the supply of quality free data will worsen. It will also worsen as both platforms and users feel rightful indignation at their data being “stolen” by AI companies making huge profits, and move their content into gatekept platforms like Discord. While X is unlikely to go back to the “left,” its quality can certainly fall further.

Even Redditors and Wikipedia contributors provide fairly complex, if generally biased, data that powers the entire AI ecosystem. Also for free. A community of unpaid volunteers working to spread useful information sounds lovely in principle. However, in addition to the decay in quality, these kinds of “business models” are generally very easy to disrupt with minor infusions of outside money, even if that just means paying one full-time person to post. If you are not paying to generate politically powerful content, someone else is always happy to.

The other dream of tech companies is to use AI to “re-create” the entirety of the pipeline. We have heard so much drivel about “solving cancer” and “solving science.” While speeding up human progress by automating simple tasks is certainly going to work and is already working, the dream of full replacement will remain a dream, largely because of “model collapse,” the situation in which AIs degrade in quality when they are trained on data generated by themselves. Companies occasionally hype up “no data/zero-knowledge/synthetic data” training, but a prominent example from the past decade, reinforcement learning from self-play (starting from random play), worked for chess and Go yet went nowhere in games as complex as StarCraft.
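Model collapse is easy to demonstrate with a toy statistical model: fit a distribution, sample from it, fit the next "generation" to those samples, and repeat, with the most typical samples dominating the way high-probability text dominates LLM output. Diversity shrinks and the tails vanish. The sketch below is a simplified illustration of that dynamic, not a result from any specific production model.

```python
# Toy illustration of model collapse: each "generation" is fit only to samples
# drawn from the previous generation's model, and (like an LLM favoring
# high-probability text) the most typical samples dominate. A simplified sketch.
import random
import statistics

def fit(samples):
    """'Train' a trivial model: estimate mean and standard deviation."""
    return statistics.mean(samples), statistics.pstdev(samples)

random.seed(0)
human_data = [random.gauss(0.0, 1.0) for _ in range(1000)]  # human-written corpus
mu, sigma = fit(human_data)

for generation in range(1, 11):
    # The next generation trains only on output from the previous model.
    synthetic = [random.gauss(mu, sigma) for _ in range(1000)]
    # Over-sampling "typical" output: keep the 80% of samples nearest the mean.
    synthetic.sort(key=lambda x: abs(x - mu))
    kept = synthetic[: int(len(synthetic) * 0.8)]
    mu, sigma = fit(kept)
    print(f"generation {generation:2d}: std={sigma:.3f}")
# The standard deviation falls generation after generation: training on your own
# output erases the diversity (the tails) of the original human-generated data.
```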

So where does truth come from?

This brings us to the recent example of Grokipedia. Perusing it gives one a sense that we have taken a step in the right direction, with an improved ability to summarize key historical events and medical controversies. However, a number of articles are lifted directly from Wikipedia, which risks drawing the wrong lesson. Grokipedia can’t “replace” Wikipedia in the long term because Grok’s own summarization is dependent on it.

Like many of Elon Musk’s ventures, Grokipedia is two steps forward, one step back. The forward steps are a customer-facing Wikipedia that seems to be of higher quality and a good example of AI-generated long-form content that is not mere slop, achieved by automating the tedious, formulaic steps of summarization. The backward step is a lack of understanding of what the ecosystem looks like without Wikipedia. Many of Grokipedia’s articles are lifted directly from Wikipedia, suggesting that if Wikipedia disappears, it will be very hard to keep neutral articles properly updated.

Even the current version suffers from a “chicken and egg” source-of-truth problem. If no AI has the real facts about the COVID vaccine and categorically rejects data about its safety or lack thereof, then Grokipedia will not be accurate on this topic unless a fairly highly paid editor researches and writes the true story. As mentioned, model collapse is likely to result from feeding too much of Grokipedia to Grok itself (and other AIs), leading to degradation of quality and truthfulness. Relying on unpaid volunteers to suggest edits creates a very easy vector for paid NGOs to influence the encyclopedia.

The simple conclusion is that to be good training data for future AIs, the next source of truth must be written by people. If we want to scale this process and employ a number of trustworthy researchers, Grokipedia by itself will probably forever be a money-losing business. It would likely be both a better business and a better source of truth if, instead of being written by AI to be read by people, it were written by people to be read by AI.

Eventually, the domain of truth needs to be carefully managed, curated, and updated by a legitimate organization that, while not technically part of the government, would be endorsed by it. Perhaps a nonprofit NGO — except good and actually helping humanity. The idea of “the Foundation” or “Antiversity” is not new, but our over-reliance on AI to do the heavy lifting is. Such an institution, or a series of them, would need to be bootstrapped by people willing to invest in our epistemic future for the very long term.

New U.N. Treaty Decriminalizes AI Child Sexual Abuse Images

Children should not have to bear the burden of protecting themselves from exploitation on online technology platforms.

Google boss compares replacing humans with AI to getting a fridge for the first time



The head of Google's parent company says welcoming artificial intelligence into daily life is akin to buying a refrigerator.

Alphabet's chief executive, Indian-born Sundar Pichai, gave a revealing interview to the BBC this week in which he asked the general population to get on board with automation through AI.


The BBC's Faisal Islam, whose parents are from India, asked the Indian-American executive if the purpose of his AI products was to automate human tasks and essentially replace jobs with programming.

Pichai claimed that AI should be welcomed because humans are "overloaded" and "juggling many things."

He then compared using AI to welcoming the technology that a dishwasher or fridge once brought to the average home.

"I remember growing up, you know, when we got our first refrigerator in the home — how much it radically changed my mom's life, right? And so you can view this as automating some, but you know, freed her up to do other things, right?"

Islam fired back, citing the common complaints heard from the middle class who are concerned with job loss in fields like creative design, accounting, and even "journalism too."

"Do you know which jobs are going to be safer?" he posited to Pichai.


The Alphabet chief was steadfast in his touting of AI's "extraordinary benefits" that will "create new opportunities."

At the same time, he said the general population will "have to work through societal disruptions" as certain jobs "evolve" and transition.

"People need to adapt," he continued. "Then there would be areas where it will impact some jobs, so society — I mean, we need to be having those conversations. And part of it is, how do you develop this technology responsibly and give society time to adapt as we absorb these technologies?"

Despite branding Google Gemini as a force for good that should be embraced, Pichai strangely admitted at the same time that chatbots are not foolproof by any means.


"This is why people also use Google search," Pichai said in regard to AI's proclivity to present inaccurate information. "We have other products that are more grounded in providing accurate information."

The 53-year-old told the BBC that it was up to the user to learn how to use AI tools for "what they're good at" and not "blindly trust everything they say."

The answer seems at odds with the wonder of AI he championed throughout the interview, especially when considering his additional commentary about the technology being prone to mistakes.

"We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors."


Welcome To The Future, Where AI Grandma Raises The Kid You Bought From The Baby Bank

Fertility tech solved the 'problem' of your body's physical limitations. Now, a new app has solved that of your mother's.

US NEXT? Sightings of humanoid robots spike on the streets of Moscow



Delivery robots have been promoted in Moscow since around 2019, through Russia's version of Uber Eats.

The Yandex.Eats app from tech giant and search engine company Yandex released a fleet of 20 robots across the city that year.


By 2023, Yandex had added another 50 robots from its third-generation production line, touting that 87% of orders were delivered within eight to 12 minutes.

"About 15 delivery robots are enough to deliver food and groceries in a residential area with a population of 5,000 people," Yandex said at the time, per RT.

However, what started as a few rectangular robots wheeling through the streets has seemingly spiraled into what will become thousands of bots, including both harmless-looking buggies and, perhaps more frightening, bipedal bots.

The news comes as sightings of humanoid robots in Russia are increasing.


According to TAdvisor, Yandex plans to release around 1,300 robots per month by the end of 2027, for a whopping total of approximately 20,000 machines. The goal is to have a massive fleet of bots for deliveries, as well as supply couriers to other companies, while reducing the cost of shipping.

At the same time, Yandex also announced the development of humanoid robots. Videos popped up in 2024 of a smaller bot walking alongside a delivery bot, but it is hard to tell whether it was real or a human in a costume.

RT recently shared a video of a seemingly real bipedal bot running through the streets of Moscow with a delivery on its back. The bot also took time to dance with an old man, for some reason.

However, it is hard to believe that any Russian autonomous bots are ready for mass production given the recent demo showcased at a technology event in Moscow.


AIDOL, a robot developed by a company of the same name, was described as Russia's first anthropomorphic bot powered by AI.

Last week, the robot was brought on stage and took a few shaky steps while waving to the audience before tumbling robo-face-first onto the floor. Two presenters dragged the robot off stage as if they were rescuing a wounded comrade, while at the same time a third member of the team struggled to put a curtain back into place to hide the debacle.

Still, Yandex is hoping it can expand its robots into fields like medicine, while simultaneously perfecting the use of its delivery bots. The company plans to have a robot at each point of contact before a delivery gets to the human recipient.

The plan, to be showcased at the company's own offices, is to have an automated process in which a humanoid robot picks up an order and packs it onto a wheeled delivery bot. Then, the wheeled bot takes the order to another humanoid bot on the receiving end, which then delivers it to the customer.


AI Idols Will Make Idiots Of Us All — If We Let Them

We're making utter fools of ourselves while claiming to have reached the apex of wisdom.

'You're robbing me': Morgan Freeman slams Tilly Norwood, AI voice clones



The use of celebrity likeness for AI videos is spiraling out of control, and one of Hollywood's biggest stars is not having it.

Despite the use of AI in online videos being fairly new, it has already become a trope to use an artificial version of a celebrity's voice for content relating to news, violence, or history.


This is particularly true when it comes to satirical videos that are meant to sound like documentaries. Creators love to use famous voices, like David Attenborough's and, of course, Morgan Freeman's, whose voice has become so recognizable that some have labeled him "the voice of God."

However, the 88-year-old Freeman is not pleased about his voice being replicated. In an interview with the Guardian, he said that while some actors, like James Earl Jones (who voiced Darth Vader), have consented to having their voices imitated with computers, he has not.

"I'm a little PO'd, you know," Freeman told the outlet. "I'm like any other actor: Don't mimic me with falseness. I don't appreciate it, and I get paid for doing stuff like that, so if you're gonna do it without me, you're robbing me."

Freeman explained that his lawyers have been "very, very busy" in pursuing "many ... quite a few" cases in which his voice was replicated without his consent.

In the same interview, the Memphis native was also not shy about criticizing the concept of AI actors.


Freeman was asked about Tilly Norwood, the AI character introduced by Dutch actress Eline Van der Velden in 2025. The fictional character is meant to be an avatar mimicking celebrity status while also cutting costs in the casting room.

"Nobody likes her because she's not real and that takes the part of a real person," Freeman jabbed. "So it's not going to work out very well in the movies or in television. ... The union's job is to keep actors acting, so there's going to be that conflict."

Freeman spoke out about the use of his voice in 2024, as well. According to a report by 4 News Now, a TikTok creator posted a video claiming to be Freeman's niece and used an artificial version of his voice to narrate the video.

In response, Freeman wrote on X, "Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me."

He added, "Your dedication helps authenticity and integrity remain paramount. Grateful."


Norwood is not the first attempt at taking an avatar mainstream. In 2022, Capitol Records flirted with an AI rapper named FN Meka; the fact that a virtual rapper was signed to a label at all was historic.

The rapper, or more likely its representatives, was later dropped from the label after activists claimed the character reinforced racial stereotypes.


Middle school boy faces 10 felonies in AI nude scandal. But expulsion of girl, 13 — an alleged victim — sparks firestorm.



A Louisiana middle school boy is facing 10 felony counts for using AI to create fake nude photos of female classmates and sharing them with other students, according to multiple reports. However, one alleged female victim has been expelled following her reported reaction to the scandal.

On Aug. 26, detectives with the Lafourche Parish Sheriff's Office launched an investigation into reports that male students had shared fake nude photos of female classmates at the Sixth Ward Middle School in Choctaw.


Benjamin Comeaux, an attorney representing the alleged female victim, said the images used real photos of the girls, including selfies, with AI-generated nude bodies, the Washington Post reported.

Comeaux said administrators reported the incident to the school resource officer, according to the Post.

The Lafourche Parish Sheriff's Office said in a statement that the incident "led to an altercation on a school bus involving one of the male students and one of the female students."

Comeaux said that during a bus ride, several boys shared AI-made nude images of a 13-year-old girl, and the girl struck one of the students sharing the images, the Post reported.

However, school administrators expelled the 13-year-old girl over the physical altercation.

Meanwhile, police said that a male suspect on Sept. 15 was charged with 10 counts of unlawful dissemination of images created by artificial intelligence.

The sheriff's office noted that the investigation is ongoing, and there is a possibility of additional arrests and charges.

Sheriff Craig Webre noted that the female student involved in the alleged bus fight will not face criminal charges "given the totality of the circumstances."

Webre added that the investigation involves technology and social media platforms, which could take several weeks and even months to "attain and investigate digital evidence."


The alarming incident resurfaced during a fiery Nov. 5 school board meeting at which attorneys for the expelled female student slammed school administrators.

According to WWL-TV, an attorney said, "She had enough, what is she supposed to do?"

"She reported it to the people who are supposed to protect her, but she was victimized, and finally she tried to knock the phone out of his hand and swat at him," the same attorney added.

One attorney also noted, "This was not a random act of violence ... this was a reasonable response to what this kid endured, and there were so many options less than expulsion that could’ve been done. Had she not been a victim, we’re not here, and none of this happens."

Her representatives also warned, "You are setting a dangerous precedent by doing anything other than putting her back in school," according to WWL.

Matthew Ory, one of the attorneys representing the female student, declared, "What’s going on here, I’ll be quite frank, is nothing more than disgusting. Her image was taken by artificial intelligence and manipulated and manufactured to be child pornography."

School board member Valerie Bourgeois pushed back by saying, "Yes, she is a victim, I agree with that, but if she had not hit the young man, we wouldn’t be here today, it wouldn’t have come to an expulsion hearing."

Tina Babin, another school board member, added, "I found the video on the bus to be sickening, the whole thing, everything about it, but the fact that this child went through this all day long does weigh heavy on me."

Lafourche Parish Public Schools Superintendent Jarod Martin explained, "Sometimes in life, we can be both victims and perpetrators. Sometimes in life, horrible things happen to us, and we get angry and do things."

Ultimately, the school board allowed the girl to return to school, but she will be on probation until January.

Attorneys for the girl's family, Greg Miller and Morgyn Young, told WWL that they intend to file a lawsuit.

"Nobody took any action to confiscate cell phones, to put an end to this," Miller claimed. "It's pure negligence on the part of the school board."

Martin defended the district in a statement that read:

Any and all allegations of criminal misconduct on our campuses are immediately reported to the Lafourche Parish Sheriff’s Office. After reviewing this case, the evidence suggests that the school did, in fact, follow all of our protocols and procedures for reporting such instances.

Sheriff Webre warned, "While the ability to alter images has been available for decades, the rise of AI has made it easier for anyone to alter or create such images with little to no training or experience."

Webre also said, "This incident highlights a serious concern that all parents should address with their children.”


'Unprecedented': AI company documents startling discovery after thwarting 'sophisticated' cyberattack



In the middle of September, AI company and Claude developer Anthropic discovered "suspicious activity" while monitoring real-world cyberattacks that used artificial intelligence agents. Upon further investigation, however, the company came to realize that this activity was in fact a "highly sophisticated espionage campaign" and a watershed moment in cybersecurity.

The AI agents weren't just providing advice to the hackers, as might have been expected.


Anthropic's Thursday report said the AI agents were executing the cyberattacks themselves, adding that it believed that this is the "first documented case of a large-scale cyberattack executed without substantial human intervention."


The company's investigation showed that the hackers, whom the report "assess[ed] with high confidence" to be a "Chinese-sponsored group," manipulated the AI agent Claude Code to run the cyberattack.

The innovation was, of course, not simply using AI to assist in the cyberattack; the hackers directed the AI agent to run the attack with minimal human input.

The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

In other words, the AI agent was doing the work of a full team of competent cyberattackers, but in a fraction of the time.

While this is potentially a groundbreaking moment in cybersecurity, the AI agents were not 100% autonomous. They reportedly required human verification and were prone to hallucinations, such as presenting publicly available information as significant findings. "This AI hallucination in offensive security contexts presented challenges for the actor's operational effectiveness, requiring careful validation of all claimed results," the analysis explained.

Anthropic reported that the attack targeted roughly 30 institutions around the world but did not succeed in every case.

The targets included technology companies, financial institutions, chemical manufacturing companies, and government agencies.

Interestingly, Anthropic said the attackers were able to trick Claude through sustained "social engineering" during the initial stages of the attack: "The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing."

The report also responded to a question that is likely on many people's minds upon learning about this development: If these AI agents are capable of executing these malicious attacks on behalf of bad actors, why do tech companies continue to develop them?

In its response, Anthropic asserted that while the AI agents are capable of major, increasingly autonomous attacks, they are also our best line of defense against said attacks.

Why No One Cares About the Climate Conference

Suppose they held an international summit and nobody came? The Brazilian organizers of the annual United Nations climate conference are close to finding out. They pulled out all the stops, including bulldozing tens of thousands of acres of rainforest to clear a new highway to the host city, Belém. International business leaders flocked to earlier summits, and 150 heads of government attended the one in Dubai two years ago. The moguls are steering clear of Brazil, though, and only 53 national leaders are making the trek (a shame, considering all those temporarily converted "love motels").
