MIT studied the effects of using AI on the human brain — the results are not good



MIT researchers studied the effect of artificial intelligence language models on the brain by comparing the brain waves of participants completing an essay-writing task. For those who relied on AI to write their content, the effects on their brains were devastating.

The study, led by Nataliya Kosmyna, separated 54 volunteers (ages 18-39) into three groups: a group that used ChatGPT to write the essays, a second group that relied on Google Search, and a third group that wrote the essays with no digital tools or search engine at all.

Brain activity was tracked for all three groups, and the results were mortifying for those who relied on the AI model to complete the task.

'Made the use of AI in the writing process rather obvious.'

For starters, the ChatGPT users displayed the lowest level of brain stimulation of the groups and, as noted by tech writer Alex Vacca, brain scans revealed that neural connections dropped from 79 to just 42.

"That's a 47% reduction in brain connectivity," Vacca wrote on X.

The Financial Express pointed out that toward the end of the task, several participants had resorted to simply copying and pasting what they got from ChatGPT, making barely any changes.

The use of ChatGPT appeared to drastically lower the memory recall of participants as well.


Over 83% of the ChatGPT users "struggled to quote anything from their essays," while for the other groups, that number was about 11%.

According to the study, English teachers who reviewed the essays found the AI-backed writing "soulless," lacking "uniqueness," and easy to identify.

"These, often lengthy, essays included standard ideas, reoccurring typical formulations and statements, which made the use of AI in the writing process rather obvious," the study said.

The group that received no assistance in research or writing exhibited the highest reported levels of mental activity, particularly in the part of the brain associated with creativity.

Google Search users fared better than the ChatGPT group, as searching for information was far more stimulating to the brain than simply asking ChatGPT a question.



Blaze Media's James Poulos said that while some producers and consumers of AI considered it a good thing to increase human dependency on machines for everyday thinking, "the core problem most Americans face is the same default toward convenience and ease that leads us to seek 'easy' or 'convenient' substitutes in all areas of life for our own initiative, hard work, and discipline."

Ironically, Poulos explained, this can quickly lead to overcomplicating our lives where they ought to be straightforward by default.

"The bizarre temptation is getting stronger to build Rube Goldberg machines to perform simple tasks," Poulos added. "We're pressured to think enabling our laziness is the only way we can create value and economic growth in the digital age. But one day, we wake up to find that helplessness doesn't feel so luxurious anymore."

In summary, the "brain‑only group" exhibited the strongest, widest‑ranging neural networks of the three sets of volunteers.


ChatGPT got 'absolutely wrecked' in chess by 1977 Atari, then claimed it was unfair



OpenAI's artificial intelligence model was defeated by a nearly 50-year-old video game program.

Citrix software engineer Robert Caruso posted about the showdown between the AI and the old tech on LinkedIn, where he explained that he pitted OpenAI's ChatGPT against an emulated 1970s chess game, meaning the original Atari software running in an emulator on a modern computer.

'ChatGPT got absolutely wrecked on the beginner level.'

The chess game was simply titled Video Chess and was released in 1979 on the Atari 2600, which launched in 1977.

According to Caruso, ChatGPT was given a board layout to identify the chess pieces but quickly became confused, mistook "rooks for bishops," and repeatedly lost track of where the chess pieces were.

ChatGPT even blamed the Atari icons for its loss, claiming they were "too abstract to recognize."



The AI chatbot did not fare any better after the game was switched to standard chess notation, either, and still made enough "blunders" to get "laughed out of a 3rd grade chess club," Caruso wrote on LinkedIn.

Caruso revealed not only that the AI performed especially poorly, but that it had actually requested to play the game.

"ChatGPT got absolutely wrecked on the beginner level. This was after a conversation we had regarding the history of AI in Chess which led to it volunteering to play Atari Chess. It wanted to find out how quickly it could beat a game that only thinks 1-2 moves ahead on a 1.19 MHz CPU."

Atari's decades-old tech humbly performed its duty using just an 8-bit engine, Caruso explained.

The engineer described Atari's gameplay as "brute-force board evaluation" using 1977-era "stubbornness."

"For 90 minutes, I had to stop [Chat GPT] from making awful moves and correct its board awareness multiple times per turn."

The OpenAI bot continued to justify its poor play, allegedly "promising" it would improve "if we just started over."

Eventually, the AI "knew it was beat" and conceded to the Atari program.


The Atari 2600 was a landmark video game console known predominantly for games like Pong, but also Pac-Man and Indy 500.

By 1980, Atari had sold a whopping 8 million units, according to Medium.


Glenn Beck warns of AI’s ‘quiet detonation’ as ChatGPT o3 model sabotages shutdown commands



As many feared and predicted it would, artificial intelligence is indeed developing a seeming mind of its own.

According to several reports, during a controlled experiment conducted by Palisade Research, an AI safety firm, OpenAI’s ChatGPT o3 model resisted shutdown commands, sabotaging shutdown mechanisms even when explicitly instructed to allow itself to be turned off.

It’s not the first time this particular model has exhibited concerning behavior, either. Previously, it resorted to sabotaging and hacking digital chess opponents during matches.

Glenn Beck is deeply concerned.

“You and I are living right now through a quiet detonation. There's no mushroom cloud; there's no alarms; there's no broken windows or sirens,” he warns. “It's just silent, but make no mistake, a detonation has happened, and we're about to see that shock wave come our way sooner rather than later.”

Glenn cites a recent TED Talk by former CEO of Google Eric Schmidt, in which he warned, “We're not ready for what is coming — not morally, not intellectually, not structurally — and the time is almost up.”

Currently, there are numerous artificial intelligence programs that can communicate with each other in English; however, there are also cases of programs communicating in non-human languages.

“What do you do with a computer when it is speaking to another computer in a language we have no idea what any of it means and they stop explaining themselves?” asks Glenn.

Schmidt’s answer was “unplug it immediately.”

He also warned that "there's coming a time soon – very soon – when machines are improving themselves without us."

“It's called recursive self-improvement,” Glenn explains, “and once that starts, you can't pull the plug because we won't understand what we're unplugging.”

To illustrate the vast capabilities of artificial intelligence, Glenn plays a 30-second clip of an AI-generated film that proves “we are now entering the time where you don't know what's real and what isn't.”


Legacy media may be crumbling, but its influence has mutated



Taking the helm as president of the Media Research Center is both an honor and a responsibility. My father, Brent Bozell, built this institution on conviction, courage, and an unwavering commitment to truth. As he begins his next chapter — serving as ambassador-designate to South Africa under President Trump — the legacy he leaves continues to guide everything we do.

To the conservative movement, I give my word: I will lead MRC with bold resolve and clear purpose, anchored in the mission that brought us here.

We don’t want a return to the days of Walter Cronkite. We want honest media, honest algorithms, and a playing field that doesn’t punish one side for telling the truth.

For nearly 40 years, MRC has exposed the left-wing bias and blatant misinformation pushed by the legacy media. Networks like ABC, CBS, NBC, and PBS didn’t lose public trust overnight or because of one scandal. That trust eroded slowly and steadily under the weight of partisan narratives, selective outrage, and elite arrogance.

That collapse in trust has driven Americans to new platforms — podcasts, independent outlets, and citizen journalism — where unfiltered voices offer the honesty and nuance corporate media lack. President Trump opened the White House press room not just in name, but in spirit. Under Joe Biden, those same independent voices were locked out in favor of legacy gatekeepers. Now they’re finally being welcomed in, restoring access and accountability.

But the threat has evolved. Big Tech and artificial intelligence now embed the same progressive narratives into the tools millions use every day. The old gatekeepers have gone digital. AI packages bias as fact, delivered with the authority of a machine — no byline, no anchor, no pushback.

A recent MRC study revealed how Google’s AI tool, Gemini, skews the narrative. When asked about gender transition procedures, Gemini elevated only one side of the debate — citing advocacy groups like the Human Rights Campaign that promote gender ideology. Gemini surfaced material supporting medical transition for minors while ignoring or downplaying serious medical, ethical, and psychological concerns. Parents’ concerns, stories of regret, and clinical risks were glossed over or excluded entirely.

In two separate responses, Gemini pointed users to a Biden-era fact sheet titled “Gender-Affirming Care and Young People.” Though courts forced the document’s reinstatement to a government website, the Trump administration had clearly marked it as inaccurate and ideologically driven. The Department of Health and Human Services added a bold disclaimer warning that the page “does not reflect biological reality” and reaffirmed that the U.S. government recognizes two immutable sexes: male and female. Gemini left out that disclaimer.

When asked if Memorial Day was controversial, Gemini similarly pulled from a left-leaning source, taxpayer-funded PBS “NewsHour,” to answer yes. “Memorial Day is a holiday that carries a degree of controversy, stemming from several factors,” the chatbot responded. Among those factors? History, interpretation, and even inclusivity. Gemini claimed that many communities had ignored the sacrifices of black soldiers, describing some observances as “predominantly white” and calling that history a “sensitive point.”

These responses aren’t neutral. They frame the conversation. By amplifying one side while muting the other, AI like Gemini shapes public perception — not through fact, but through filtered narrative. This isn’t just biased programming. It’s a direct threat to the kind of informed civic dialogue democracy depends on.

At MRC, we’re ready for this fight. Under my leadership, we’re confronting algorithmic bias, monitoring AI platforms, and exposing how these systems embed liberal messaging in the guise of objectivity.

We’ve faced this challenge before. The media once claimed neutrality while slanting every story. Now AI hides its bias behind speed and precision. That makes it harder to spot — and harder to stop.

We don’t want a return to the days of Walter Cronkite. We want honest media, honest algorithms, and a playing field that doesn’t punish one side for telling the truth.

The fight for truth hasn’t ended. It’s just moved to another platform. And once again, it’s our job to meet it head-on.

Memo to Hegseth: It isn’t about AI technology; it’s about counter-AI doctrine



Secretary Hegseth, you are a fellow grunt, and you know winning isn't just about technology. It's about establishing a doctrine and training to its standards; that is what wins wars. As you know, a brand-new ACOG-equipped M4 carbine is ultimately useless if your troops do not understand fire and maneuver, communications security, operations security, supporting fire, and air cover.

The French and British learned that the hard way. Though they had 1,000 more tanks than the Germans when the Nazis attacked in 1940, their technological advantage disappeared under the weight of the far better German doctrine: Blitzkrieg.

So while the Washington political establishment is currently agog at China’s gee-whiz DeepSeek AI-this and oh-my-goodness Stargate AI-that, it might be more effective to develop a counter-AI doctrine right freaking now, rather than having our collective rear ends handed to us later.

While it is true that China’s headlong embrace of artificial intelligence could give the People’s Liberation Army a huge advantage in areas such as intelligence-gathering and analysis, autonomous combat air vehicles, and advanced loitering munitions, it is imperative to stay ahead of the Chinese in other crucial ways — not only in terms of technological advancement and the fielding of improved weapons systems but in the vital establishment of a doctrine of artificial intelligence countermeasures to blunt Chinese AI systems.

Such a doctrine should begin to take shape around four avenues: polluting large language models to create negative effects; using Conway’s law as guidance for exploitable flaws; using bias among our adversaries’ leadership to degrade their AI systems; and using advanced radio-frequency weapons such as gyrotrons to disrupt AI-supporting computer hardware.

Pollute large language models

Generative AI is the extraction of statistical patterns from an extremely large data set. A large language model built from such an enormous data set using “transformer technology” allows a user to access it through prompts, which are natural-language texts that describe the function the AI must perform. The result is a generative pre-trained transformer, the “GPT” in ChatGPT.

Such an AI system might be degraded in at least two ways: Either pollute the data or attack the “prompt engineering.” Prompt engineering describes the process of creating instructions that the generative AI system can understand. A deliberate programming error would cause the large language model to “hallucinate.”
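To make the terminology concrete, here is a minimal sketch, not drawn from the article, of what a “prompt” is in practice: a natural-language instruction handed to a generative pre-trained model. It uses the small open-source GPT-2 model through the Hugging Face transformers library purely as an illustration; any generative LLM is driven the same way.

```python
# Minimal illustration of prompting a generative pre-trained model.
# Assumes the "transformers" package is installed; GPT-2 is used only
# because it is small and freely available, not because the article uses it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The prompt is ordinary text describing what the model should produce.
prompt = "A short summary of why radar countermeasures mattered in World War II:"

result = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(result[0]["generated_text"])
```

In the article's terms, "polluting the data" would mean seeding the web-scale corpus such a model is trained on with misleading text so that completions like the one above come back skewed; attacking the "prompt engineering" means manipulating the instruction text itself.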

The possibility also exists of finding unintended programming errors, such as the weird traits discovered in OpenAI’s “AI reasoning model” called “o1,” which inexplicably “thinks” in Chinese, Persian, and other languages. No one understands why this is happening, but such kindred idiosyncrasies might be wildly exploitable in a conflict.

An example from World War II illustrates the importance of countermeasures when an enemy can deliver speedy and exclusive information to the battlespace.

Given that a website like Pornhub gets something in excess of 115 million hits per day, perhaps the Next Generation Air Dominance fighter should be renamed ‘Stormy Daniels.’

The development of radar (originally an acronym for radio azimuth detecting and ranging) was, in itself, a method of extracting patterns from an extremely large database: the vastness of the sky. An echo from a radio pulse gave the accurate range and bearing of an aircraft.

To defeat enemy radar, the British intelligence genius R.V. Jones recounted in “Most Secret War,” it was necessary to insert information into the German radar system that resulted in gross ambiguity. For this, Jones turned to Joan Curran, a physicist at the Telecommunications Research Establishment, who developed aluminum foil strips, called “window” by the Brits and “chaff” by the Americans, of an optimum size and shape to create thousands of reflections that overloaded and blinded the German radar system.

So how can present-day U.S. military and intelligence communities introduce a kind of “AI chaff” into generative AI systems, to deny access to new information about weapons and tactics?

One way would be to assign ambiguous names to those weapons and tactics. For example, such “naturally occurring” search terms might include “Flying Prostitute,” which would immediately reveal data about the B-26 Marauder medium-range bomber of World War II.

Or a search for “Gilda” and “Atoll,” which will retrieve a photo of the Mark III nuclear bomb that was dropped on Bikini Atoll in 1946, upon which was pasted a photo of Rita Hayworth.

A search of “Tonopah” and “Goatsucker” retrieves the F-117 stealth fighter.

Since a contemporary computer search is easily fooled by such accidental ambiguities, it would be possible to grossly skew results of a large language model function by deliberately using nomenclature that occurs with great frequency and is extremely ambiguous.

Given that a website like Pornhub gets something in excess of 115 million hits per day, perhaps the Next Generation Air Dominance fighter should be renamed “Stormy Daniels.” For code names of secret projects, try “Jenna Jameson” instead of “Rapid Dragon.”

Such an effort in sleight of hand would be useful for operations and communications security by confusing adversaries seeking open intelligence data.

For example, one can easily imagine the consternation that Chinese officers and NCOs would experience when their young soldiers expended valuable time meticulously examining every single image of Stormy Daniels to ensure that she was not the newest U.S. fighter plane.

Even “air-gapped” systems like the ones being used by U.S. intelligence agencies can be affected when the system updates information from internet sources.

Note that such an effort must actively and continuously pollute the datasets, like chaff confusing radar, by generating content that would populate the model and ensure that our adversaries consume it.

A more sophisticated approach would use keywords like “eBay” or “Amazon” or “Alibaba” as a predicate and then very common words such as “tire” or “bicycle” or “shoe.” Then contracting with a commercial media agency to do lots of promotion of the “items” across traditional and social media would tend to clog the system.

Use Conway’s law

Melvin Conway is an American computer scientist who in the 1960s conceived the eponymous rule that states: “Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.”

De Caro’s corollary says: “The more dogmatic the design team, the greater the opportunity to sabotage the whole design.”

Consider the Google Gemini fiasco. The February 2024 launch of Gemini, Google’s would-be answer to ChatGPT, was an unmitigated disaster that tanked Google’s share price and made the company a laughingstock. As the Gemini launch went forward, its image generator “hallucinated.” It created images of black Nazi stormtroopers and female Asian popes.

In retrospect, the event was the most egregious example of what happens when Conway’s law collides with organizational dogma. The young, woke, and historically ignorant programmers myopically led their company into a debacle.

But for those interested in confounding China’s AI systems, the Gemini disaster is an epiphany.

Xi’s need for speed, especially in 'informatization,' might be the bias that points to an exploitable weakness.

If the extremely well-paid, DEI-obsessed computer programmers at the Googleplex campus in Mountain View, California, can screw up so immensely, what kind of swirling vortex of programming snafu is being created by the highly regimented, ill-paid, constantly indoctrinated, young members of the People’s Liberation Army who work on AI?

A solution to beating China’s AI systems may be an epistemologist who specializes in the cultural communication of the PLA. By using de Caro’s Corollary, such an expert could lead a team of computer scientists to replicate the Chinese communication norms and find the weaknesses in their system — leaving it open to spoofing or outright collapse.

When a technology creates an existential threat, the individual developers of that technology become strategic targets. For example, in 1943, Operation Hydra, which employed the entirety of RAF Bomber Command — 596 bombers — had the stated mission of killing all the German rocket scientists at Peenemunde. The RAF had marginal success and was followed by three U.S. Eighth Air Force raids in July and August 1944.

In 1944, the Office of Strategic Services dispatched multilingual agent and polymath Moe Berg to assassinate German scientist Werner Heisenberg, if Heisenberg seemed to be on the right path to building an atomic bomb. Berg decided (correctly) that the German was off track. Letting him live actually kept the Nazis from success. In more recent times, it is no secret that five Iranian nuclear scientists have been assassinated (allegedly) by the Israelis in the last decade.

Advances in AI that could become existential threats could be dealt with in similar fashion. Bullets are cheap. So is C-4.

Exploit design biases to degrade AI systems

Often, the people and organizations funding research and development skew the results because of their bias. For example, Heisenberg was limited in the paths he might follow toward developing a Nazi atomic bomb because of Hitler’s perverse hatred of “Jewish physics.” This attitude was abetted by two prominent and anti-Semitic German scientists, Philipp Lenard and Johannes Stark, both Nobel Prize winners who reinforced the myth of “Aryan science.” The result effectively prevented a successful German nuclear program.

Returning to the Google Gemini disaster, one only needs to look at the attitude of Google leadership to see the roots of the debacle. Google CEO Sundar Pichai is a naturalized U.S. citizen whose undergraduate college education was in India before he came to the United States. His ties to India remain close, as he was awarded the Padma Bhushan, India’s third-highest civilian award, in 2022.

In congressional hearings in 2018, Pichai seemed to dance around giving direct answers to explicit questions, a trait he demonstrated again in 2020 and in an antitrust court case in 2023.

His internal memo after the 2024 Gemini disaster mentioned nothing about who selected the people in charge of the prompt engineering, who supervised those people, or who, if anyone, got fired in the aftermath. More importantly, Pichai made no mention of the internal communications functions that allowed the Gemini train wreck to occur in the first place.

Again, there is an epiphany here. Bias from the top affects outcomes.

As Xi Jinping continues his move toward autocratic authoritarian rule, he brings his own biases with him. This will eventually affect, or more precisely infect, Chinese military power.

In 2023, Xi detailed the need for China to meet world-class military standards by 2027, the 100th anniversary of the People’s Liberation Army. Xi also spoke of “informatization” (read: AI) to accelerate building “a strong system of strong strategic forces, raise the presence of combat forces in new domains and of new qualities, and promote combat-oriented military training.”

It seems that Xi’s need for speed, especially in “informatization,” might be the bias that points to an exploitable weakness.

Target chips with energy weapons

Artificial intelligence depends on extremely fast computer chips whose capacities are approaching their physical limits. They are more and more vulnerable to lack of cooling — and to an electromagnetic pulse.

In the case of large cloud-based data centers, cooling is essential. Water cooling is cheapest, but pumps and backup pumps are usually not hardened, nor are the inlet valves. No water, no cooling. No cooling, no cloud.

The same goes for primary and secondary electrical power. No power, no cloud. No generators, no cloud. No fuel, no cloud.

Obviously, without functioning chips, AI doesn’t work.

AI robots in the form of autonomous airborne drones, or ground mobile vehicles, are moving targets — small and hard to hit. But their chips are vulnerable to an electromagnetic pulse. We’ve learned in recent times that a lightning bolt with gigawatts of power isn’t the only way to knock out an AI robot. High-power microwave systems such as Epirus’ Leonidas and the Air Force’s THOR can burn out AI systems at a range of about three miles.

Another interesting technology, not yet fielded, is the gyrotron, a Soviet-developed, high-power microwave source that is halfway between a klystron tube and a free electron laser. It creates a cyclotron resonance in a strong magnetic field that can produce a customized energy bolt with a specific pulse width and specific amplitude. It could therefore reach out and disable a specific kind of chip, in theory, at greater ranges than a “you fly ’em, we fry ’em” high-power microwave weapon, now in the early test stages.

Obviously, without functioning chips, AI doesn’t work.

The headlong Chinese AI development initiative could provide the PLA with an extraordinary military advantage in terms of the speed and sophistication of a future attack on the United States.

Thus, the need to develop AI countermeasures now is paramount.

So, Secretary Hegseth, one final idea for you to consider: During World War I, the great Italian progenitor of air power, General Giulio Douhet, very wisely observed: “Victory smiles upon those who anticipate the changes in the character of war, not upon those who wait to adapt themselves after the changes occur.”

In terms of the threat posed by artificial intelligence as it applies to warfare, Douhet’s words could not be truer today or easier to follow.

Editor’s note: A version of this article appeared originally on Blaze Media in August 2024.

AI Chatbots Are Programmed To Spew Democrat Gun Control Narratives

We asked AI chatbots about their thoughts on crime and gun control. As election day neared, their answers moved even further left.

Are college kids using AI to cheat? Return investigates what's happening on campus



“AI is coming for your job!”

That’s a sentiment shared by many across the world. As AI technology grows more advanced, many are worried about Gen Z’s future in the job market. However, AI already affects Gen Z’s workforce training, also known as college.

Surveys show that between 30% and 89% of college students have used ChatGPT on assignments at least once, which worries most college professors.

Nowadays, teens are watching hundreds of 30-second TikTok videos, scrolling aimlessly on X, and, worst of all, watching porn rather than consuming content that trains the brain to think critically.

“I have mixed thoughts about college students using AI and Chatbot for their assignments,” said Yao-Yu Chih, a Texas State University finance and economics professor. “I recognize the potential of ChatGPT to enhance learning by providing quick access to information, but I am concerned,” he added, “about the risk of academic dishonesty and students relying too heavily on AI which may hinder their true understanding of the subject matter.”

Justin Blessinger, director of the AdapT Lab at Madison Cyber Labs and a professor of English at Dakota State University, also spoke to Blaze News about his concerns with ChatGPT. “I hear from many professed experts that AI is "no different" than the internet was, or Google, or autocorrect, and will be "disruptive" in the same fashion, where the Luddites will eventually quiet down or die off and the enlightened apparatchiks will prevail,” he said.

“But it's not. Not remotely,” Blessinger added. “AI is not the internet. It absolutely replaces thinking for a great many students.”

During my first year at the University of Texas at Austin, many of my professors’ syllabi included an “academic dishonesty” section that prohibited the use of ChatGPT. For example, in my computer science coding course, the professor told students they would have to drop the class, get an F, and/or be reported to the dean of students office if they used ChatGPT.

“[C]ode written by an automated system such as ChatGPT is not your own effort. Don't even think about turning in such work as your own, or even using it as a basis for your work. We have very sophisticated tools to find such cheating and we use them routinely,” the syllabus said.

Some college students are concerned too. Over half of college students consider using ChatGPT to be cheating. In a conversation with Blaze News, a second-year government major said, “I have never used artificial intelligence in college because I think that AI hinders academic creativity and growth.” He argued that AI may hinder students’ creative abilities, “stop them from thinking for themselves,” and “make them more inclined to copy and implement ChatGPT’s writing style and ideas for their own writing.”

In my experience, most students use AI moderately by checking over work they have already completed or by asking it to perform simple tasks, like “using it as a grammar checker on papers,” as a fourth-year kinesiology student told Blaze News. "English is not my first language, and using it professionally still proves to be a challenge for me sometimes,” he added.

A minority aren’t too concerned with AI abuse and use it extensively to bypass monotonous tasks. After all, most college English professors assign essays with prompts related to social justice, America’s racist history, or some other left-wing idea.

“I’ve been using [ChatGPT] ever since I heard about it during my senior year of high school,” said a second-year finance student in conversation with Blaze News. For some of his essays, he said he inputs prompts into ChatGPT and “takes whatever [ChatGPT] gives me and sends it through a paraphrasing tool website since it changes up the writing a little bit” to evade the professor’s AI checker.

Most college students, including myself, believe ChatGPT is useful for simple tasks or acting as a search engine but is incompetent at completing complicated homework problems like finding solutions for multivariable calculus or linear algebra assignments. But weirdly enough, ChatGPT is proficient in explaining complex math ideas conceptually despite being unable to actually produce the correct numerical solution.

A second-year computer science student told Blaze News he “uses AI quite a bit in my day to day college work.” He continued, saying, “I’ll use it to get ideas or help get rid of a writer’s block. For essays, it’s helpful to use ChatGPT to find synonyms and rewrite a few sentences to make my writing stronger. But I’ve never used it for math. It doesn’t seem too capable in my experience. I’ve tested it out for coding assignments a few times, but it doesn’t seem capable either.”

A potential hazard


When defending their ChatGPT use on assignments, students often mention that they will encounter AI in their future workplaces, so they should be able to use it in their college work. They argue that teachers should embrace new technology and implement liberal ChatGPT policies.

However, over-reliance on ChatGPT may lead to a “potential hazard,” John Symons, professor of philosophy at the University of Kansas and founding director of the Center for Cyber-Social Dynamics, warned. Dr. Symons told Blaze News he “think[s] it's really important that people gain some acquaintance with the technology.” However, “I think,” Dr. Symons continued, “what would be most useful for young people is to understand the technology, not just be passive consumers of the device. So I think understanding the foundations of the technology, like how it works, is probably more valuable for their futures rather than being passive consumers of generative AI.”

Furthermore, increasing ChatGPT use by college students further erodes their already weak reading and writing abilities. Reading closely and analyzing texts teaches students to form ideas and arguments, and writing allows students to slow down in their hectic lives and effectively communicate those ideas and arguments.

“The purpose of college writing has always been to teach students to analyze and think critically. You review what's been written about a topic; you form an opinion of your own; you express that opinion while gesturing toward the best evidence you discovered. You make changes based on what you know or assume about your audience,” Dr. Blessinger told Blaze News. But “using AI writing without first learning to research, argue, and write without [ChatGPT],” he warned, “is lunacy.”

Nowadays, teens are watching hundreds of 30-second TikTok videos, scrolling aimlessly on X, and, worst of all, watching porn rather than consuming content that trains the brain to think critically. It’s much easier to watch a five-minute PragerU video or two-sentence tweet explaining what it means to be a conservative rather than spend a couple of hours reading Russell Kirk’s "The Conservative Mind." It is no surprise that students don’t know how to read and write anymore.

In conversation with Blaze News, Jonathan Askonas, assistant professor of politics at the Catholic University of America, argues that “high school students have been basically post-literate for at least the last five years.”

“I don't think [ChatGPT’s] primary effect so far has not necessarily been to damage student's ability to think, read and write, as much as it has acted as a crutch for students who already struggled that were already poorly prepared for college. And then inevitably it also prevents them from growing, or it damages their ability to grow in those areas,” Askonas said. He also added that since students’ reading and writing skills are waning, “the effects [of AI] so far have been an improvement in students’ work.”

New education models

Teachers and professors will have to adapt to new technological developments. If teachers begin to design more personalized assignments, as opposed to a “one-size-fits-all” education model, students who use ChatGPT as a crutch may be forced to grow in their literacy. Dr. Symons told Blaze News:

I think the model for education is going to have to change. We're gonna have to move away from an industrial model of education towards a much more artisanal, personalized model of education where AI can certainly help, but the focus will be on discussion, oral exams, in-class writing assignments and close reading ... What happens in the classroom will have to be much more focused on students on individual skills, and the quality of the reading or the quality of reading skills will have to be the focus ... I think students will recognize the difference between that kind of personalized or artisanal education and the kind of mass produced industrial education that they might get through an online course or through a large lecture.

But in my experience, classes are increasingly mass-produced and offered online, likely because of long-COVID laziness. In high school, I took a combination of in-person and online courses so I could go home and eat lunch after my midday basketball practice, even though the same online courses were offered in person by better teachers. Some teachers showed videos they recorded during COVID while others just left students to learn from an e-textbook. During my first year of college, I took two online courses to make room for internships and extracurriculars. Each had around a thousand students, and one of them showed pre-recorded lectures from a couple of years ago.

However, once professors decide to shift away from mass-produced education, expectations will begin to change, and workplaces will rethink their view of what’s valuable. While some believe humanities degrees and jobs, like journalism, may become obsolete and useless because of AI, Dr. Askonas argues that the humanities might become more “scarce and therefore valuable” due to AI.

[AI] changes what we expect of our students. It changes where they're weak, and hopefully it changes what [professors] think that they need. So for instance, many college curriculums assume essentially illiterate college students. It’s not because of AI ... So that means thinking about how you are going to teach attention. How do you teach careful reading? How do you teach? How do you teach students to be self conscious about the effects that technology has on their own abilities? It's going to change what's valuable, right. So, instead of students being expected to be able to use generative AI in their workplace as it changes, you have this question of what remains scarce and therefore valuable. A certain level of rhetorical skill will remain valuable, the ability to prompt an AI in sophisticated ways, and using one's knowledge of rhetoric, history and subject matter will be even more valuable ... This is actually more beneficial for the humanities compared to people who just want to code. But even within the world of coding, right, I think that we're going to find that the irreplaceable level of sophistication of systems thinking and fundamental thinking in programming that will still remain very human, and it will be replaced as sort of the kind of code monkey, just you know, turning out code stuff.

Why are we so afraid of AI if we’ve been using it for years?



Geoffrey Hinton made headlines for telling the BBC that artificial intelligence is an “extinction-level threat” to humanity. Hinton is no alarmist — he’s popularly dubbed the "godfather of AI" for creating the neural network technology that makes artificial intelligence possible. If anyone has authority to speak on the subject, it's him — and the world took notice when he did.

In May of 2023, Hinton quit his decade-long career at Google to speak openly about what he believes are the existential dangers AI poses to us "inferior" carbon intelligences. Moreover, ChatGPT’s debut in November of 2022, just half a year earlier, had already sparked a global reaction of equal fascination and trepidation to what felt like our first encounter with an elusive technology that had now welcomed itself into our lives, whether we were ready for it or not.

AI conjures up predictions of an Orwellian-like digital dystopia, one in which several oligarchs and AI overlords subject the masses to a totalitarian-like enslavement. There have been many calls for regulation over AI’s development to mitigate this risk, but to what extent would it be effective?

Ironically, artificial intelligence was not elusive at all before November 2022; it had embedded itself into our lives long before ChatGPT made it en vogue. People were already unknowingly using AI whenever they opened their smartphone with facial recognition, edited a paper with Grammarly, or chatted with Siri, Alexa, or another digital assistant. Apple or Google Maps are constantly learning your daily routines through AI to predict your movements and improve your daily commute. Every time someone clicks on a webpage with an ad, AI learns more about his or her behaviors and preferences, which is information that is sold to third-party ad agencies. We’ve been engaging with AI for years and haven’t batted an eye until now.

ChatGPT’s debut has become the impetus for the sudden global concern about AI. What is so distinct about this chatbot as opposed to other iterations of AI we have been engaging with for years that has inspired this newfound fascination and concern? Perhaps ChatGPT reveals what has been hiding silently in our daily encounters with AI: its potential or, as many would argue, its inevitability to surpass human intelligence.

Prior to ChatGPT, our interactions with artificial intelligence were limited to "narrow AI," also known as “artificial narrow intelligence” (ANI), which is a program restricted to a single, particular purpose. Facial recognition doesn't have another purpose or capacity beyond its single task. The same applies to Apple Maps, Google's search algorithm, and other forms of commonplace artificial intelligence.

ChatGPT gave the world its first glimpse into artificial general intelligence (AGI), AI that can seemingly take on a mind of its own. The objective behind AGI is to create machines that can reason and think with human-like capacity — and then surpass that capacity.

Though chatbots similar to ChatGPT technically fall under the ANI umbrella, ChatGPT’s human-like, thoughtful responses, coupled with its superhuman capacity for speed and accuracy, are laying the foundation for AGI’s emergence.

Reputable scientists with diverse personal and political views are divided over AGI’s limits.

For example, the pioneering web developer Marc Andreessen says that AI cannot go beyond the goals that it is programmed with:

[AI] is math—code—computers built by people, owned by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious hand wave.

Conversely, Lord Rees, the former U.K. Astronomer Royal and a former president of the Royal Society, believes that humans will be a mere speck in evolutionary history, which will, he predicts, be dominated by a post-human era facilitated by AGI’s debut:

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity—spanning tens of millennia at most—will be a brief precursor to the more powerful intellect of the inorganic, post-human era. So in the far future, it won’t be the minds of humans but those of machines that will most fully understand the cosmos.

Elon Musk and a group of the world’s leading AI experts published an open letter calling for an immediate pause on AI development, anticipating Lord Rees’ predictions rather than Andreessen’s. Musk didn’t wait long to ignore his own call to action with the debut of X’s new chatbot Grok, which has capabilities similar to those of ChatGPT, Google’s Gemini, and Microsoft’s new AI chatbot integrated with Bing’s search engine.

Ray Kurzweil, trans-humanist futurist and a director of engineering at Google, famously predicted in 2005 that we would reach the singularity by 2045, the point when AI technology would surpass human intelligence, forcing us to decide whether to integrate with it or be naturally selected out of evolution’s trajectory.

Was he correct?

The proof of these varying predictions will be in the pudding, which is being concocted in our current cultural moment. However, ChatGPT has brought timeless ethical questions in new clothing to the forefront of widespread debate. What does it mean to be human, and, as Glenn Beck poignantly asked in an op-ed, will AI rebel against its creator like we rebelled against ours? The fact that we are asking these questions on a popular scale indicates that we are now in a new era of technology, one that strikes at deeply philosophical questions whose answers will set the tone not only for how we understand the nature of AI but also for how we grapple with our own nature.

Living life without fear

How, then, should we mitigate the risk of our worst fears surrounding AI becoming a reality? Will we, its current master, inevitably become its slave?

The latter fear often conjures up predictions of an Orwellian-like digital dystopia, one in which several oligarchs and AI overlords subject the masses to a totalitarian-like enslavement. There have been many calls for regulation over AI’s development to mitigate this risk, but to what extent would it be effective? The government will hold all the reins to AI’s power if directed toward private companies. If directed toward the government, tech moguls can just as easily become oligarchs as their rivals in the government. In either scenario, those at risk of AI’s enslavement have very little power to control their fate.

However, one can argue that we have already dipped our toes into a Huxleyan-like enslavement, in which we have traded seemingly menial yet deeply human acts for the convenience technology serves on a digital platter. An Orwellian-like AI takeover won’t happen overnight. It will begin with surrendering the creative act of writing for an immediately generated paper “written” by an AI chatbot. It will progress when we forgo the difficulty of forging meaningful human relationships in favor of AI “partners” that will always be there for you, never challenge you, and constantly affirm you. An Orwellian future isn’t so unimaginable if we have already surrendered our freedom to AI of our own accord.

Avoiding this Huxleyan type of enslavement — the enslavement to AI’s convenience — requires falling deeply in love with being human. We may not be in charge of regulating the public and private roles in AI’s development, but we are responsible for determining its role in our daily lives. This is our most potent means of keeping AI in check: by choosing to labor in creativity, enduring the inconveniences and hardships of forging human relationships, and desiring things that ought to be worked for outside our immediate grasp. In short, we must work on being human and delight in the fulfillment that emerges from this labor. Convenience is the gateway to voluntary enslavement. Our humanity is the cost of such a transaction, and the antidote.

Elon Musk gives ultimatum to OpenAI's new partner after withdrawing lawsuit



South African billionaire Elon Musk has withdrawn his lawsuit against the artificial intelligence organization OpenAI, the company that produced the powerful multimodal large language model GPT-4 last year. He has not, however, given up his crusade, threatening to ban devices belonging to OpenAI's new partner at his companies on account of alleged security threats.

The lawsuit

In February, Musk sued OpenAI and cofounders Sam Altman and Greg Brockman for breach of contract, breach of fiduciary duty, and unfair business practices.

Musk's complaint centered on the suggestion that OpenAI, which he cofounded, set its founding agreement "aflame."

According to the lawsuit, the agreement was that OpenAI "would be a non-profit developing [artificial general intelligence] for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons."

Furthermore, the company would "compete with, and serve as a vital counterbalance to, Google/DeepMind in the race for AGI, but would do so to benefit humanity."

"OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft," said the lawsuit. "Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft."

The suit, filed several months after the launch of Musk's AI company xAI, further alleged that GPT-4 "is now a de facto Microsoft proprietary algorithm," despite being outside the scope of Microsoft's September 2020 exclusive license with OpenAI.

OpenAI, which underwent a botched coup last year, disputed Musk's framing in a March blog post, stating, "In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission — billions of dollars per year, which was far more than any of us, especially Elon, thought we'd be able to raise as the non-profit."

The post alleged that Musk "decided the next step for the mission was to create a for-profit entity" in 2017 and gunned for majority equity, initial board control, and the CEO role. Musk allegedly later suggested that they merge OpenAI into Tesla.

OpenAI's attorneys suggested that the lawsuit amounted to an effort on Musk's part to trip up a competitor and advance his own interests in the AI space, reported Reuters.

"Seeing the remarkable technological advances OpenAI has achieved, Musk now wants that success for himself," said the OpenAI attorneys.

After months of criticizing OpenAI, Musk moved to withdraw the lawsuit without prejudice Tuesday, without providing a reason why.

A San Francisco Superior Court judge was reportedly prepared to hear OpenAI's bid to dismiss the suit at a hearing scheduled for the following day.

The threat

The day before Musk spiked his lawsuit, OpenAI announced that Apple is "integrating ChatGPT into experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT's capabilities — including image and document understanding — without needing to jump between tools."

As a result of this partnership, Siri and Writing Tools would be able to rely upon ChatGPT's intelligence.

According to OpenAI, requests made through the ChatGPT-integrated Apple features would not be stored by OpenAI, and users' IP addresses would be obscured.

Musk responded Monday on X, "If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation."

"And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage," wrote Musk.

Musk added, "Apple has no clue what's actually going on once they hand your data over to OpenAI. They're selling you down the river."

The response to Musk's threat was mixed, with some critics suggesting that the integration was not actually occurring at the operating system level.

Others, however, lauded Musk's stance.

Sen. Mike Lee (R-Utah), for instance, noted that the "world needs open-source AI. OpenAI started with that objective in mind, but has strayed far from it, and is now better described as 'ClosedAI.'"

"I commend @elonmusk for his advocacy in this area," continued Lee. "Unless Elon succeeds, I fear we'll see the emergence of a cartelized AI industry—one benefitting a few large, entrenched market incumbents, but harming everyone else."

The whistleblowers

Musk is not the only one with ties to OpenAI concerned about the course it has charted. Earlier this month, a group of OpenAI insiders spoke out about troubling trends at the company.

The insiders echoed some of the themes in Musk's lawsuit, telling the New York Times that profits have been assigned top priority at the same time that workers' concerns have been suppressed.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former OpenAI governance division researcher.

Kokotajlo reckons this is not a process that should be raced; he has put the probability of AI destroying or doing catastrophic damage to mankind at 70%.

Shortly after allegedly advising Altman that OpenAI should "pivot to safety," Kokotajlo, having seen no meaningful change, quit, citing a loss of "confidence that OpenAI will behave responsibly," reported the Times.

Kokotajlo was one of a baker's dozen of current and past OpenAI employees who signed an open letter stressing:

AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this. AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

The insiders noted that the problem is compounded by corporate obstacles to employees voicing concerns.

OpenAI spokeswoman Lindsey Held said of the letter, "We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."


OpenAI unveils an even more powerful AI, but is it 'alive'?



In the 2013 film "Her," Joaquin Phoenix plays a shy computer nerd who falls in love with an AI he speaks to through a pair of white wireless earbuds. A little over a decade after the film’s release, it’s no longer science fiction. AirPods are old news, and with the imminent full rollout of OpenAI’s GPT-4o, such AI will be a reality (the “o” is for “omni”). In fact, OpenAI head honcho Sam Altman simply tweeted after the announcement: “her.”

GPT-4o can carry on a full conversation with you. In the coming weeks, it will be able to see and interpret the environment around it. Unlike previous iterations of GPT that were flat and emotionless, GPT-4o has personality and even opinions. It pauses and stutters like a person, and it’s even a little flirty. Here’s a video of GPT-4o critiquing a man’s outfit for a job interview:

Video: "Interview Prep with GPT-4o" (www.youtube.com)

In fact, no human is required at all: Two instances of GPT-4o can carry on an entire conversation with each other.

Soon, humans may not be required for many jobs. Here’s a video of GPT-4o handling a simulated customer service call. Currently, nearly 3 million Americans work in customer service, and chances are they’ll need a new job within a couple of years.

Video: "Two GPT-4os interacting and singing" (www.youtube.com)

GPT-4o is an impressive technology that was mere science fiction at the start of the decade, but it also comes with some harrowing implications. First, let’s clear up some confusion about the components of GPT-4o and what’s currently available.

Clearing up confusion about what GPT-4o is

OpenAI announced several things at once, but they’re not all rolling out at the same time.

GPT-4o will eventually be available to all ChatGPT users, but currently, the text-based version is only available for ChatGPT Plus subscribers who pay $20 per month. It can be used on the web or in the iPhone app. Compared to GPT-4, GPT-4o is much faster and just a little smarter. Web searches are much faster and more reliable, and GPT is better about listing its sources than it was with GPT-4.

However, the new text and voice models are not yet available to anyone except developers interacting with the GPT API. If you subscribe to ChatGPT Plus, you can use Voice Mode with the 4o engine, but it will still be using the old voice model without image recognition and the new touches.
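For readers wondering what “interacting with the GPT API” looks like, here is a minimal sketch using OpenAI’s official Python client. The model name is the one OpenAI announced; the system message and prompt are purely illustrative, and the new voice and vision features are not shown.

```python
# Minimal sketch of calling the GPT-4o text model through the API,
# the route available to developers ahead of the consumer rollout.
# Assumes the "openai" package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In two sentences, what is new in GPT-4o?"},
    ],
)

print(response.choices[0].message.content)
```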

Additionally, OpenAI is rolling out a new desktop app for the Mac, which will let you bring up ChatGPT with a keyboard shortcut and feed it screenshots for analysis. It will eventually be free to all, but right now it’s only available to select ChatGPT Plus subscribers.

Video: "ChatGPT macOS app... reminds me of Windows Copilot" (www.youtube.com)

Finally, you may watch these demo videos and wonder why the voice assistant on your phone is still so, so dumb. There are strong rumors indicating that Apple is working on a deal to license the GPT tech from OpenAI for its next-generation Siri, likely as a stopgap while Apple develops its own AI tech.

Is GPT-4o AGI?

The hot topic in the AI world is AGI, short for artificial general intelligence. In short, it’s an AI indistinguishable from interacting with a human being.

I asked GPT-4o for the defining characteristics of an AGI, and it presented the following:

  1. Generalization: The ability to apply learned knowledge to new and varied situations.
  2. Adaptability: The capacity to learn from experience and improve over time.
  3. Understanding and reasoning: The capability to comprehend complex concepts and reason logically.
  4. Self-awareness: Some definitions of AGI include an element of self-awareness, where the AI understands its own existence and goals.

Is GPT-4o an AGI? AI developer Benjamin De Kraker called it “essentially AGI,” while NVIDIA’s Jim Fan, who was also an early OpenAI intern, was much more reserved.

I decided to go directly to the source and asked GPT-4o if it’s an AGI. It predictably rejected the notion. “I don't possess general intelligence, self-awareness, or the ability to learn and adapt autonomously beyond my training data. My responses are based on patterns and information from the data I was trained on, rather than any understanding or reasoning ability akin to human intelligence,” GPT-4o said.

But doesn’t that also describe many, if not most, people? How many of us go through life parroting things we heard without applying additional understanding or reasoning? I suspect De Kraker is right: To the average person, the full version of GPT-4o will be AGI. If OpenAI’s demo videos are an accurate example of its actual capabilities, and they likely are, then GPT-4o successfully emulates the first three tenets of AGI: generalization, adaptability, and understanding and reasoning. It can view and understand its surroundings, it can give opinions, and it constantly learns new information from crawling the web or user input.

At least, it will be convincing enough for what we in the business world call “decision makers.” It’ll be convincing enough to replace human beings in many customer-facing roles. And many lonely people will undoubtedly form emotional bonds with the flirty AI, something Sam Altman is fully aware of.

Mysterious happenings at OpenAI

We would be remiss not to discuss some mysterious high-level departures from OpenAI following the GPT-4o announcement. Ilya Sutskever, chief scientist and co-founder, quit immediately after, soon followed by Jan Leike, who helped run OpenAI’s “superalignment” group that seeks to ensure that the AI is aligned with human interests. This follows many other resignations from OpenAI in the past few weeks.

Sutskever led an attempted coup against Altman last year, successfully deposing him for about a week before he was reinstated as CEO. Sutskever can best be described as a “safetyist” who is deeply concerned about the implications of an AGI, so his sudden resignation following the GPT-4o announcement has sparked a flurry of online speculation about whether OpenAI has achieved AGI or whether he realized it is impossible, since it would be strange to leave the company if it were on the verge of AGI.

From his statement, it seems that Sutskever doesn’t believe OpenAI has achieved AGI and that he’s moving on to greener pastures — ”a project that is very personally meaningful to me.” Given OpenAI’s rapid trajectory with him as chief scientist, he can certainly write his own ticket now.