South Africa's communications minister says that human oversight is sorely needed in the age of artificial intelligence.
The reason stems from a draft of the country's new AI policy, which leaders hoped would address concerns about ethics and regulations related to the technology.
'There will be consequence management for those responsible.'
The country's Minister of Communications and Digital Technologies, Mmoba Solomon Malatsi, made a shocking admission that he would be withdrawing the national AI framework after its integrity had been "compromised."
Malatsi took to his X page on Sunday to explain that an internal review confirmed the policy included fake citations, likely generated by AI.
"The Draft ... contains various fictitious sources in its reference list," the minister wrote.
The draft had been made available to allow for public comment, but scrutiny over the fake sources sparked a review after just three weeks.
"This failure is not a mere technical issue but has compromised the integrity and credibility of the draft policy," the politician continued. "The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened."
The 40-year-old said the incident proves why "vigilant human oversight over the use of artificial intelligence is critical."
RELATED: This Big Tech patent tracks your brain, eyes, and body — with earbuds
The policy draft outlined a new National AI Commission, ethics board, and regulatory authority around AI that would coordinate to enforce new policies and ethical standards, Reuters reported.
It also set out a framework for compensation related to any harm caused by the use of artificial intelligence.
The South Africans also emphasized building their digital infrastructure, including cloud computing and computer farms, while calling for a reduction in reliance on hardware from China and the United States.
RELATED: Universal basic income is a dangerous delusion

Malatsi seemingly took his lumps in his post, calling the ordeal "a lesson we take with humility."
"I want to reassure the country that we are treating this matter with the gravity it deserves. There will be consequence management for those responsible for drafting and quality assurance," he added.
Malatsi is a member of South Africa's Democratic Alliance party, which holds the second-most seats in the National Assembly. He serves as minister in South Africa's Government of National Unity, a coalition formed when no party wins an outright majority.
Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!
Republicans are raising alarms about new vehicle safety requirements that could introduce intrusive monitoring technology — including systems capable of disabling a car against a driver’s will.
The mandate stems from a provision in the 2021 Infrastructure Investment and Jobs Act, made law under President Biden, which requires automakers to install advanced impaired-driving prevention technology in new vehicles by 2027.
'The car dashboard becomes your judge, your jury, and your executioner.'
Critics argue that the implications go far beyond safety.
“The car dashboard becomes your judge, your jury, and your executioner,” said Rep. Thomas Massie (R-Ky.), who has been one of the most vocal opponents of the measure.
Section 24220 of the law — titled “Advanced Impaired Driving Technology” — directs regulators to require systems designed to prevent drunk-driving fatalities. As Blaze News has previously reported, the technology under consideration includes both passive and active monitoring tools, many powered by artificial intelligence.
These may include infrared cameras that track a driver’s eye movements and pupil dilation, as well as “cockpit-embedded sensors” capable of analyzing a driver’s breath to estimate blood alcohol levels. Other proposed methods include touch-based sensors that use tissue spectroscopy to detect alcohol through the skin of a finger or palm.
“I voted against this,” said Anna Paulina Luna (R-Fla.), criticizing the measure. “Unfortunately, too many Republicans sided with Democrats and it passed.”
RELATED: Creepy new laws will mean your car monitors you 24/7 — eyes, skin, even breath
Massie has warned that the technology could extend beyond detecting impairment to evaluating driving behavior more broadly.
“The car itself will monitor your driving. And if the car thinks that you're not doing a good job driving, it will disable itself,” he said in remarks to Congress.
“How do you appeal your sentence once your car ... has judged you to be incapable of driving? ... Do you press a button on the dashboard? Do you start talking to an AI?”
He also questioned how authorities would respond to false positives, asking whether law enforcement would be dispatched to assist drivers whose vehicles are mistakenly disabled.
“The technology is unworkable,” Massie said.
RELATED: FIRST LOOK New York International Auto Show: Cool cars, but drivers still face sticker shock
He later introduced legislation to block federal funding for the provision, including any requirements that could enable so-called “kill switch” capabilities in vehicles.
The bill failed in the House, with 57 Republicans joining Democrats in opposition. Four Democrats — Luis Correa (Calif.), Marcy Kaptur (Ohio), Valerie Hoyle (Ore.), and Marie Gluesenkamp Perez (Wash.) — voted in favor.
Chinese Propaganda Outlets Jump Into Crusade Against Data Centers as Beijing Races To Achieve AI Supremacy
Propaganda outlets controlled by China—as well as Russia and Iran—are promoting campaigns in the United States to oppose the construction of new data centers, indicating that Beijing and Moscow are looking to impede artificial intelligence innovation in the United States. The campaign appears to have made inroads with at least one American lawmaker, Sen. Bernie Sanders (I-Vt.), who is participating in a discussion Wednesday with two Chinese academics on "the existential threat of AI."
A new report claims that internal memos at Meta say the company will be harvesting data from employees to train artificial intelligence.
The training software Meta plans on using will go directly onto employees' computers and will track what the employees are doing at work.
'Agents primarily do the work and our role is to direct.'
The new directive will track U.S.-based employees' activities on their computers, Meta reportedly told staffers, capturing mouse movements, clicks, and keystrokes. In turn, this data will train Meta's AI models so that the automated agents can perform work tasks autonomously, Reuters reported.
In a statement to Return, a Meta spokesman said that if the company is "building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them."
This includes not only the aforementioned mouse movements and clicks, but also "navigating dropdown menus," for example.
The recent report stated that Meta will use a Model Capability Initiative that runs on work-related apps or websites and takes snapshots of what appears on the employees' screens.
Meta described the initiative as launching an internal tool that will capture the mouse clicks and movements "on certain applications to help us train our models."
At the same time, the spokesman said employee data would remain safe.
RELATED: West Virginia Republicans are betraying their voters for AI special interests

"There are safeguards in place to protect sensitive content, and the data is not used for any other purpose," the spokesman asserted in his statement.
The data is only collected for "model training purposes" and will "not be used in performance reviews, and managers cannot access it," the statement concluded.
Meta was asked to clarify what "everyday tasks" it wants its AI agents to perform and whether these amount to tasks that would otherwise be performed by a human, but the company did not answer those questions.
While the internal memos have not been published, Reuters claimed to have reviewed multiple memos, including one posted internally to the Meta SuperIntelligence Labs team.
"This is where all Meta employees can help our models get better simply by doing their daily work," it allegedly said.
RELATED: Mother-daughter farmers reject eye-popping Big Tech bids: 'I'll stay ... and feed a nation'

Meta's chief technology officer, Andrew Bosworth, reportedly shared a separate memo this week telling employees that internal data collection would increase at the company as roles transform into directing AI agents to do work.
"The vision we are building towards is one where our agents primarily do the work and our role is to direct, review, and help them improve," Bosworth reportedly stated.
Is legendary writer-director David Zucker worried about AI? Surely, you can't be serious!
Zucker — the Hollywood veteran behind smash hits like "Airplane!" and "The Naked Gun" series, as well as cult classics like "BASEketball" and "Top Secret!" — says he's confident that no computer will ever take his job.
Zucker took some time out from preproduction on his new movie, film noir spoof "Star of Malta," to speak to Align about the state of the biz.
'We actually know what we're doing.'
Unlike many of his peers in the film industry, Zucker doesn't see technology as a threat — as long as you have talent.
"Certainly, AI is no good for writing scripts. You can't write a funny script using AI," he affirmed.
Nor can AI oversee a production, said Zucker, citing Tom Cruise as someone "experienced and talented in their craft and dedicated to good work" and therefore able to shepherd a project from start to finish.
He allowed that there are some Hollywood executives who don't mind taking shortcuts. "[That's] fine for Seth MacFarlane," he said, in a not-so-subtle dig at the "Family Guy" creator and producer of the recent "Naked Gun" reboot.
RELATED: 'Trey didn't have a car': 'Airplane!' director David Zucker on humble origins of 'South Park' empire

As for Zucker, he's compelled to continue writing comedy because, "No one can write this stuff." And when it comes to new projects, he would rather take up the task himself with his own team than take a gamble on someone else.
Zucker noted that he wrote "Star of Malta" in just 11 days.
"We actually know what we're doing," he said.
Zucker's faith in himself and his team makes him the rare Hollywood insider who remains sanguine about increasing AI use.
The recent AI resurrection of the late Val Kilmer? Zucker said that as long as permission is sought, he does not have a problem with it.
He is also intrigued by the possibility of AI-powered de-aging.
"I think that's a good use of it," he said, adding that he's open to using it in his own work. "If you have to cast somebody, and they happen to be older than you need, you can do it."
RELATED: King of comedy: 1988 'Naked Gun' tops list of 100 funniest flicks

Zucker, who also offers an online course in spoof comedy, isn't afraid to call out an industry that's out of touch with the taste of audiences.
"There's 9% of people who just don't have a sense of humor," he said. "There's like zero sense of humor. So the studios are being guided by those people."
Foreign-made electronics pose increasing threats to consumers, especially as the technology becomes more widely available.
Many of these devices ship with built-in back doors that, at best, feed a complex network dedicated to stealing user data for profit. At worst, they are a massive national security concern.
'Not just surveillance, but real-time analysis.'
In late March, the Federal Communications Commission announced it would begin following a federal directive that bans all foreign-made internet routers.
The executive branch determined that foreign routers "pose unacceptable risks to the national security of the United States or the safety and security of United States persons," the FCC wrote.
The FCC added that foreign routers represent a "supply chain vulnerability" that could pose a "severe cybersecurity risk."
This was followed by an updated list of banned router manufacturers, which includes a plethora of Chinese companies, the U.S.-registered company ComNet (which is owned by a Chinese company), and the Russian-owned Kaspersky Lab.
Connecting to every device in a home, internet routers are "one of the most valuable targets for foreign hackers," says Aiden Buzzetti, president of the Bull Moose Project.
He told Return, "If an adversary can compromise the router, they can surveil your traffic, reach into your connected devices, or rope the whole thing into a botnet."
Tyler Saltsman, CEO and founder of Department of War-partnered EdgeRunner AI, explained that "even a subtle vulnerability in hardware or firmware can enable not just surveillance, but real-time analysis" of consumer data.
This allows for automated exploitation at scale, giving adversaries the ability to monitor patterns and trends across the U.S. population.
RELATED: The world cut the cord. Government won’t.

Buzzetti recently sat down with FCC Chairman Brendan Carr, who explained that the government found routers to be a sector that was particularly vulnerable to foreign cyber attacks.
Carr said that the United States' No. 1 priority is ensuring that it eliminates dependence on electronics and technologies from foreign adversary nations.
The FCC took earlier action against foreign drones out of fears of foreign surveillance as well.
In December, the FCC pointed to a federal directive banning foreign-made unmanned aircraft systems, or drones, as well as those that use critical components produced in foreign countries.
"Drones was another one where there was a determination made that all foreign-produced drones present an unacceptable national security threat," Carr told the Bull Moose Project last week.
Another threat addressed by members of Congress recently has been the spying apparatus revealed through foreign robots.
Recent research showed that Chinese robot manufacturer Unitree Robotics had a pre-installed back door in its Go1 robot dogs that allowed for the surveillance of customers around the world.
Axios reported on research that showed the spyware was public-facing, meaning anyone with the proper information could view customers' live camera feeds without login credentials.
Rep. John Moolenaar (R-Mich.), chair of the House China Select Committee, told Axios that there was a "direct national security threat" that was being actively investigated by the government on this topic.
RELATED: I called out the CIA on X — and then my account disappeared
These foreign entities could embed AI models in tech used by American consumers, Saltsman remarked in comments to Return, adding that consumer products like routers, drones, and soon robots can therefore be morphed from "passive data conduits" into "active interpreters of sensitive information."
"This amplifies the value of any data they collect and the risk if they're compromised," Saltsman explained.
The federal government has established an approval process through which companies can apply to sell drone systems or routers in the United States.
So far, the approved list consists of just five drone systems and two router companies. One drone company appears to be based in the U.K., while another is seemingly from Norway. The rest are American.
In September 1787, the Constitutional Convention in Philadelphia came to a close. Delegates had spent months debating and negotiating the structure for a new American government. When the final document was presented for signatures, most of the delegates agreed to support it. But one of the most influential figures in the room refused.
George Mason of Virginia would not sign the Constitution.
Mason’s refusal did not stem from radical opposition to the new proposed government. In fact, he played a major role in shaping America’s early political philosophy. Yet when the convention concluded, Mason believed something essential was missing. The proposed Constitution created a powerful federal government, but it contained no explicit protections for individual liberty. Without a Bill of Rights, Mason warned, citizens would have little protection against abuses of power.
If artificial intelligence is going to help shape the future of our society in profound ways, should it not also be built to respect the same freedoms that Americans have fought for since the founding of the republic?
History ultimately proved his concerns justified. Mason’s refusal helped spark the debate that led to the adoption of the Bill of Rights a few years later. His message was simple. When a new, powerful institution is created, the protection of liberty cannot be an afterthought.
More than two centuries later, we find the United States again standing at the edge of a transformative moment. Today, the institution taking shape is artificial intelligence. And this institution may end up being just as consequential to society as the shaping of the country in the late eighteenth century.
The most advanced AI systems are already beginning to shape our culture and how people access information, businesses make decisions, institutions function, and public discourse unfolds. These systems are being integrated into everything from banking and education to media and health care. In many cases, AI models act as intermediaries between humans and the world of information around them.
This development carries enormous promise. Artificial intelligence could accelerate medical research, improve productivity, and unlock scientific discoveries that once seemed impossible.
At the same time, the growing influence of AI raises an important question. What values will guide the systems that increasingly shape our society?
AI is not neutral by default. Every model reflects decisions made by its designers. The data used to train it, the rules used to filter its responses, and the priorities embedded in its algorithms all influence how it interacts with users. Beyond just answering questions and responding to prompts, these systems influence what information people encounter and how issues are understood.
In other words, the institutions building AI today are quietly creating the informational infrastructure of the future.
George Mason understood that powerful institutions require clear limits. His concern centered on ensuring that a strong central government would respect the rights of the people it serves.
Artificial intelligence deserves the same scrutiny.
Recent controversies surrounding AI tools have revealed how easily political or ideological assumptions can shape technological systems. A growing body of studies has found that many leading AI models tend to reflect left-leaning political assumptions in their outputs, raising concerns about viewpoint bias. Major AI platforms have faced backlash for producing historically inaccurate outputs to satisfy modern ideological expectations, as seen in widely publicized image-generation failures.
Social media platforms, powered by similar AI-driven algorithms, already curate what users see, amplifying certain viewpoints while quietly burying others. Even leaders within the AI industry have acknowledged the risk that these systems could influence public discourse in ways that are difficult for users to detect.
More egregious examples can be seen with Chinese AI models, such as DeepSeek, which have been shown to avoid or redirect discussion on topics that conflict with official government positions, reflecting the priorities of the state rather than the pursuit of truth.
Taken together, these examples demonstrate how AI can be shaped to filter reality itself, whether by governments, corporations, or the assumptions embedded by developers.
These examples illustrate a basic reality. Artificial intelligence can either serve as a tool for expanding human freedom or as an instrument for shaping and controlling public discourse and, by extension, society. The outcome will depend on the values embedded in these systems today.
A meaningful step forward would be the adoption of clear, principled guidelines for building and deploying these systems. At minimum, AI development should prioritize truth-seeking over narrative-shaping, ensuring that systems are designed to inform rather than steer users toward predetermined conclusions.
Developers should also commit to transparency in training data sources, so the public has a clearer understanding of what informs these models.
Just as important, developers should resist coercion from governments or corporations seeking to suppress lawful speech or manipulate outcomes. They should reject internal policies that seek to bury dissenting views under the vague banner of “safety,” a term that too often masks subjective judgment.
These principles may not solve every problem, but they would begin to align AI with the values of a free society.
George Mason refused to sign the Constitution because he believed liberty needed stronger protection before a new federal government was enacted. His insistence on a Bill of Rights helped ensure that the American experiment would endure longer by providing explicit protections for individual freedom.
The United States now faces a similar moment as artificial intelligence becomes woven into the fabric of modern life. AI will influence how people learn, communicate, and understand the world. The values guiding these systems will shape society in ways that are difficult to predict.
Before this technological infrastructure becomes fully embedded in our daily lives, it is worth asking a question that George Mason would likely recognize.
If artificial intelligence is going to help shape the future of our society in profound ways, should it not also be built to respect the same freedoms that Americans have fought for since the founding of the republic?
The founders believed liberty required clear protections before a new, powerful structure was fully unleashed. As we enter the age of artificial intelligence, their lesson remains as relevant as ever.
The future is here, and it seemingly includes CEOs using chatbots to create plans to avoid having to pay out hundreds of millions of dollars.
That was a judge's conclusion after a smaller American studio sued a giant, publicly traded South Korean conglomerate that allegedly prevented it from putting out its product.
'Lock down Steam/console publishing rights and access rights.'
Krafton CEO Kim Chang-han oversees nearly $2 billion of revenue across a multitude of companies, including PUBG Studios, maker of the massively popular online shooter PUBG: Battlegrounds.
Since 2021, Krafton has controlled Unknown Worlds, an American studio responsible for the game Subnautica, which sold over five million copies in two years.
With so much success from the first game, Krafton agreed to a $250 million earnout if Subnautica 2 was able to meet specific sales targets. Krafton's CEO was not keen on letting that happen and subsequently plotted "Project X," a plan to prevent the payout.
After internal reports projected Subnautica 2 was likely to hit its targets, things got hairy. According to court documents, when Krafton’s Head of Corporate Development Maria Park warned CEO Kim that removing Unknown Worlds' leadership via "dismissal with cause" opened them up to "lawsuit and reputational risk," he turned to ChatGPT for help.
The chatbot told Kim that the earnout would be "difficult to cancel" but suggested forming an internal task force to either negotiate a "deal" or execute a "takeover" of the company; Kim obliged and allegedly continued to follow ChatGPT's suggestions.
RELATED: Anthropic says its own new model is too dangerous for the public — but not these Big Tech companies
Not only did Kim allegedly share his strategies from ChatGPT with colleagues, but the strategies included a "pressure and leverage package" against Unknown Worlds.
Among its recommendations, ChatGPT suggested Krafton undermine any David versus Goliath narratives, while urging Kim to prepare for scenarios like buyouts and replacements.
Most jarringly, it also suggested locking down Unknown Worlds' ability to post its new game for sale on Steam, the largest gaming distributor for PC games.
"Lock down Steam/console publishing rights and access rights over code/build pipeline through both legal and technical aspects," ChatGPT said, the lawsuit revealed. "For the earn-out freeze, keep room for negotiations through provision stating 'immediate removal if specific development results are achieved.'"
Kim did as the chatbot recommended and locked down the publishing, and Subnautica 2 could not be released. When Unknown Worlds CEO Ted Gill asked for control to be returned, Kim allegedly ignored him and told a Krafton studio rep to relay to Gill that he had "no intention of transferring stuff back to you guys (like the Steam app)."
RELATED: Does this stealthy startup hold the key to keeping data centers out of your neighborhood?

While Gamesradar reported that Krafton leadership admitted to using ChatGPT for "faster answers," the company told Kotaku that some characterizations made about it have been false.
In response to Unknown Worlds' claims that Krafton said its chat logs no longer exist, the company called the claim "simply a distraction from their own efforts to destroy evidence."
In the end, a Delaware judge ruled that Kim relied on ChatGPT to craft a strategy aimed at avoiding the $250 million payment.
"Fearing he had agreed to a 'pushover' contract, KRAFTON’s CEO consulted an artificial intelligence chatbot to contrive a corporate 'takeover' strategy," Vice Chancellor Lori Will said in her ruling, per Economic Times.
The court maintained that Krafton was expected to exercise independent judgment and not outsource its decisions to AI systems.
PC Gamer has since reported that Unknown Worlds will be given an extension to mid-September to reach its earnout goals, with the possibility of a further extension to March 2027.
The game is set for early release in May 2026.
Artificial intelligence has taken the wired world by storm, but the backlash came almost as fast. Progressives complain about job losses, environmentalists question the ecological impacts of large data centers, and local activists clamor for assurances that household utility bills won’t skyrocket because of the centers’ voracious electricity demands. Others simply worry that the technology will overwhelm humans’ ability to control it.
At least in part, these reactions stem from the overselling of AI.
AI is super cool, but it’s not superhuman, nor is it superintelligent. AI is simply very fast processing of vast amounts of data.
Intelligence, knowledge, understanding, and wisdom are distinct concepts. The distinctions among them elucidate the scope and limits of both human and electronic “intelligence.”
AI models are amazing and useful despite being incomprehensible to most of us, but AI is not infallible.
Intelligence is the ability to process information into an internally coherent framework that is useful and adds or detracts from knowledge to the extent that it is more or less accurate. Knowledge is the accumulation of information organized into coherent frames or models that help us understand. Understanding is awareness of the significance, purpose, or meaning of accumulated knowledge.
And wisdom is judgment seasoned by experience and the awareness that intelligence, knowledge, and understanding are limited, inherently flawed, and useful only to the extent that they advance a worthwhile purpose.
Nearly 2,500 years ago, the Oracle of Delphi reportedly declared that no man was wiser than Socrates. Socrates claimed to be stunned by this because he was keenly aware of how much he didn’t know. But after talking to others widely acclaimed to be knowledgeable, such as the leading politicians, poets, philosophers, and artisans of his day, he discerned this Delphic wisdom: Those claiming knowledge were ignorant of their own ignorance, whereas Socrates knew he knew nothing.
For this insight, Socrates was put to death for impiety and corrupting the youth of Athens, thereby proving for all time both the foolishness of his accusers’ certainty and the wisdom of Socratic questioning.
This bears repeating today, as we enter the age of artificial intelligence: It’s wise to question the “intelligence” of machines, the “knowledge” they propagate, and our understanding of the significance and limits of the technology.
AI models are amazing and useful despite being incomprehensible to most of us, but AI is not infallible. AI will expand human knowledge and understanding of the world only if and to the extent that human users are encouraged to question AI results, processes, and functions.
People make mistakes, as do those who make and train the machines. Still, people tend to trust machines more than people, especially for information that is difficult for humans to process. For example, tennis players have more faith in electronic line calls than in human ones, although that faith in the new technology has been shaken by errors, such as ball marks inconsistent with the electronic calls.
As AI use spreads, people will increasingly rely on AI and trust its results for routine tasks, like Google searches, while remaining more skeptical of its results for complex tasks and unwilling to trust AI to handle certain tasks without human intervention.
It’s wise to question AI’s results; errors are common even in routine searches.
Examples of AI errors, hallucinations, and political bias are common. A Northwestern University business school professor of my acquaintance recently asked ChatGPT for advice evaluating investment alternatives. ChatGPT recommended that he invest in a particular fund and described in detail that fund’s returns, risks, and assets. When the professor went to invest in ChatGPT’s recommended fund, he discovered that the fund did not actually exist; ChatGPT made it all up, a phenomenon commonly referred to as “AI hallucination.”
Indeed, AI can screw up even mundane tasks: In my research for this piece, a Google AI summary ascribed quotes to Socrates that are not supported by any historical record.
Artificial intelligence — like human intelligence — is prone to error and is not always reliable, but that’s to be expected, especially in a fledgling technology. AI is artificial intelligence, not artificial knowledge, understanding, or wisdom. AI is a processor, a very fast processor, that organizes and distills information, and organized information is easier to evaluate and use by humans than vast amounts of unorganized information.
Properly understood, AI supplements and does not replace human intelligence, knowledge, or understanding; plus, the limitations and faults within these amazing models remind us that human intelligence is limited, too. Human intelligence imperfectly organizes the imperfect data to which a human has access and frames data in a subjective, not an objective, manner.
Many of us expect the machines that humans make to have "better" intelligence than that of their human creators — more objective, more comprehensive, more insightful. This is a naïve hope. In one sense, it is "better": AI organizes more information faster than humans can. But who do people think programmed the thing? Every AI model is regurgitating imperfect information collected, created, and input by imperfect, subjective human beings.
What to make of all this?
First, perhaps the math nerds creating AI are mistakenly training machines to handle information processing on human topics as if they were math problems with a specific answer. Perhaps instead, machines should be trained to suggest questions to consider instead of answers to accept with respect to human inquiries relating to politics, economics, psychology, child-rearing, crop science — the full range of arts, humanities, and social sciences.
Second, people training these machines should be explicit about the biases and perspectives being built into how the AI organizes, sorts, and frames information. My own bias on this topic is that I believe American AI companies should be building AI with quintessentially American framing.
Third, AI creators should consider the political, regulatory, and legal risks of “overselling” what AI is and what it can do. For example, should AI creators anticipate a duty to warn users of shortcomings in AI’s results and/or disclaimers of warranties?
Fourth, AI creators need to consider improving the quality of the data on which the systems are trained, recognizing that many online data sources intentionally mislead to advance political agendas. Perfectly “unbiased” information is impossible to obtain, but some information is more accurate and less biased than other information; trainers should exercise better judgment about data.
The creation of AI large language models is an incredible feat of engineering. AI is quite useful and will soon be essential, but it is still a product of human invention. As such, we need to recognize that AI is ultimately just the latest, greatest — but still imperfect — tool invented and used by Homo sapiens to make life better for Homo sapiens.
Editor’s note: This article was originally published by RealClearPolitics and made available via RealClearWire.