Are college kids using AI to cheat? Return investigates what's happening on campus



“AI is coming for your job!”

That’s a sentiment shared by many across the world. As AI technology grows more advanced, many are worried about Gen Z’s future in the job market. However, AI already affects Gen Z’s workforce training, also known as college.

Surveys show that anywhere from 30% to 89% of college students have used ChatGPT on assignments at least once, figures that worry many college professors.

“I have mixed thoughts about college students using AI and Chatbot for their assignments,” said Yao-Yu Chih, a Texas State University finance and economics professor. “I recognize the potential of ChatGPT to enhance learning by providing quick access to information, but I am concerned,” he added, “about the risk of academic dishonesty and students relying too heavily on AI which may hinder their true understanding of the subject matter.”

Justin Blessinger, director of the AdapT Lab at Madison Cyber Labs and a professor of English at Dakota State University, also spoke to Blaze News about his concerns with ChatGPT. “I hear from many professed experts that AI is ‘no different’ than the internet was, or Google, or autocorrect, and will be ‘disruptive’ in the same fashion, where the Luddites will eventually quiet down or die off and the enlightened apparatchiks will prevail,” he said.

“But it's not. Not remotely,” Blessinger added. “AI is not the internet. It absolutely replaces thinking for a great many students.”

During my first year at the University of Texas at Austin, many of my professors’ syllabi included an “academic dishonesty” section that prohibited the use of ChatGPT. For example, in my computer science coding course, the professor warned that students who used ChatGPT would have to drop the class, receive an F, and/or be reported to the dean of students’ office.

“[C]ode written by an automated system such as ChatGPT is not your own effort. Don't even think about turning in such work as your own, or even using it as a basis for your work. We have very sophisticated tools to find such cheating and we use them routinely,” the syllabus said.

Some students are concerned too: over half of college students consider using ChatGPT to be cheating. In a conversation with Blaze News, a second-year government major said, “I have never used artificial intelligence in college because I think that AI hinders academic creativity and growth.” He argued that AI may hinder students’ creative abilities, “stop them from thinking for themselves,” and “make them more inclined to copy and implement ChatGPT’s writing style and ideas for their own writing.”

In my experience, most students use AI moderately by checking over work they have already completed or by asking it to perform simple tasks, like “using it as a grammar checker on papers,” as a fourth-year kinesiology student told Blaze News. “English is not my first language, and using it professionally still proves to be a challenge for me sometimes,” he added.

A minority aren’t too concerned with AI abuse and use it extensively to bypass monotonous tasks. After all, most college English professors assign essays with prompts related to social justice, America’s racist history, or some other left-wing idea.

“I’ve been using [ChatGPT] ever since I heard about it during my senior year of high school,” said a second-year finance student in conversation with Blaze News. For some of his essays, he said he inputs prompts into ChatGPT and “takes whatever [ChatGPT] gives me and sends it through a paraphrasing tool website since it changes up the writing a little bit” to evade the professor’s AI checker.

Most college students, including myself, believe ChatGPT is useful for simple tasks or acting as a search engine but is incompetent at completing complicated homework problems, like finding solutions for multivariable calculus or linear algebra assignments. Oddly enough, though, ChatGPT is proficient at explaining complex math ideas conceptually despite being unable to produce the correct numerical solution.

A second-year computer science student told Blaze News he “uses AI quite a bit in my day to day college work.” He continued, saying, “I’ll use it to get ideas or help get rid of a writer’s block. For essays, it’s helpful to use ChatGPT to find synonyms and rewrite a few sentences to make my writing stronger. But I’ve never used it for math. It doesn’t seem too capable in my experience. I’ve tested it out for coding assignments a few times, but it doesn’t seem capable either.”

A potential hazard

When defending their ChatGPT use on assignments, students often mention that they will encounter AI in their future workplaces, so they should be able to use it in their college work. They argue that teachers should embrace new technology and implement liberal ChatGPT policies.

However, over-reliance on ChatGPT may lead to a “potential hazard,” John Symons, professor of philosophy at the University of Kansas and founding director of the Center for Cyber-Social Dynamics, warned. Dr. Symons told Blaze News he “think[s] it's really important that people gain some acquaintance with the technology.” However, “I think,” Dr. Symons continued, “what would be most useful for young people is to understand the technology, not just be passive consumers of the device. So I think understanding the foundations of the technology, like how it works, is probably more valuable for their futures rather than being passive consumers of generative AI.”

Furthermore, increasing ChatGPT use further erodes college students’ already poor reading and writing abilities. Reading closely and analyzing texts teaches students to form ideas and arguments, and writing allows students to slow down in their hectic lives and effectively communicate those ideas and arguments.

“The purpose of college writing has always been to teach students to analyze and think critically. You review what's been written about a topic; you form an opinion of your own; you express that opinion while gesturing toward the best evidence you discovered. You make changes based on what you know or assume about your audience,” Dr. Blessinger told Blaze News. But “using AI writing without first learning to research, argue, and write without [ChatGPT],” he warned, “is lunacy.”

Nowadays, teens are watching hundreds of 30-second TikTok videos, scrolling aimlessly on X, and, worst of all, watching porn rather than consuming content that trains the brain to think critically. It’s much easier to watch a five-minute PragerU video or two-sentence tweet explaining what it means to be a conservative rather than spend a couple of hours reading Russell Kirk’s "The Conservative Mind." It is no surprise that students don’t know how to read and write anymore.

In conversation with Blaze News, Jonathan Askonas, assistant professor of politics at the Catholic University of America, argued that “high school students have been basically post-literate for at least the last five years.”

“I don’t think [ChatGPT’s] primary effect so far has necessarily been to damage students’ ability to think, read, and write, as much as it has acted as a crutch for students who already struggled, [who] were already poorly prepared for college. And then inevitably it also prevents them from growing, or it damages their ability to grow in those areas,” Askonas said. He also added that since students’ reading and writing skills are waning, “the effects [of AI] so far have been an improvement in students’ work.”

New education models

Teachers and professors will have to adapt to new technological developments. If teachers begin to design more personalized assignments, as opposed to a “one-size-fits-all” education model, students who use ChatGPT as a crutch may be forced to grow in their literacy. Dr. Symons told Blaze News:

I think the model for education is going to have to change. We're gonna have to move away from an industrial model of education towards a much more artisanal, personalized model of education where AI can certainly help, but the focus will be on discussion, oral exams, in-class writing assignments and close reading ... What happens in the classroom will have to be much more focused on [students’] individual skills, and the quality of their reading skills will have to be the focus ... I think students will recognize the difference between that kind of personalized or artisanal education and the kind of mass-produced industrial education that they might get through an online course or through a large lecture.

But in my experience, classes are increasingly mass-produced and offered online, likely because of lingering COVID-era laziness. In high school, I took a combination of in-person and online courses so I could go home and eat lunch after my midday basketball practice, even though the same online courses were offered in person by better teachers. Some teachers showed videos they had recorded during COVID while others just left students to learn from an e-textbook. During my first year of college, I took two online courses to make room for internships and extracurriculars. Each had around a thousand students, and one of them showed lectures recorded a couple of years earlier.

However, once professors shift away from mass-produced education, expectations will begin to change, and workplaces will rethink what they consider valuable. While some believe AI may render humanities degrees and jobs like journalism obsolete, Dr. Askonas argues that the humanities might become more “scarce and therefore valuable” because of it.

[AI] changes what we expect of our students. It changes where they're weak, and hopefully it changes what [professors] think that they need. So for instance, many college curriculums assume essentially illiterate college students. It's not because of AI ... So that means thinking about how you are going to teach attention. How do you teach careful reading? How do you teach students to be self-conscious about the effects that technology has on their own abilities? It's going to change what's valuable, right? So, instead of students being expected to be able to use generative AI in their workplace as it changes, you have this question of what remains scarce and therefore valuable. A certain level of rhetorical skill will remain valuable, and the ability to prompt an AI in sophisticated ways, using one's knowledge of rhetoric, history, and subject matter, will be even more valuable ... This is actually more beneficial for the humanities compared to people who just want to code. But even within the world of coding, I think we're going to find that the sophistication of systems thinking and fundamental thinking in programming will remain very human, and what will be replaced is the code-monkey work, just turning out code.

Why are we so afraid of AI if we’ve been using it for years?



Geoffrey Hinton made headlines for telling the BBC that artificial intelligence is an “extinction-level threat” to humanity. Hinton is no alarmist — he’s popularly dubbed the “godfather of AI” for his pioneering work on the neural networks that make modern artificial intelligence possible. If anyone has authority to speak on the subject, it's him — and the world took notice when he did.

In May of 2023, Hinton quit his decade-long career at Google to speak openly about what he believes are the existential dangers AI poses to us "inferior" carbon intelligences. ChatGPT’s debut in November of 2022, just half a year earlier, had already sparked a global reaction of equal parts fascination and trepidation at what felt like our first encounter with an elusive technology that had welcomed itself into our lives, whether we were ready for it or not.

Ironically, artificial intelligence was not elusive at all before November 2022; it had embedded itself into our lives long before ChatGPT made it en vogue. People were already unknowingly using AI whenever they unlocked their smartphones with facial recognition, edited a paper with Grammarly, or chatted with Siri, Alexa, or another digital assistant. Apple Maps and Google Maps constantly learn your daily routines through AI to predict your movements and improve your commute. Every time someone clicks on a webpage with an ad, AI learns more about his or her behaviors and preferences, information that is then sold to third-party ad agencies. We’ve been engaging with AI for years and haven’t batted an eye until now.

ChatGPT’s debut became the impetus for the sudden global concern about AI. What is so distinct about this chatbot, as opposed to the other iterations of AI we have been engaging with for years, that it has inspired this newfound fascination and concern? Perhaps ChatGPT reveals what has been hiding silently in our daily encounters with AI: its potential, or, as many would argue, its inevitable capacity, to surpass human intelligence.

Prior to ChatGPT, our interactions with artificial intelligence were limited to "narrow AI," also known as “artificial narrow intelligence” (ANI), which is a program restricted to a single, particular purpose. Facial recognition doesn't have another purpose or capacity beyond its single task. The same applies to Apple Maps, Google's search algorithm, and other forms of commonplace artificial intelligence.

ChatGPT gave the world its first glimpse into artificial general intelligence (AGI), AI that can seemingly take on a mind of its own. The objective behind AGI is to create machines that can reason and think with human-like capacity — and then surpass that capacity.

Though chatbots similar to ChatGPT technically fall under the ANI umbrella, ChatGPT’s human-like, thoughtful responses, coupled with its superhuman capacity for speed and accuracy, are laying the foundation for AGI’s emergence.

Reputable scientists with diverse personal and political views are divided over AGI’s limits.

For example, the web browser pioneer and venture capitalist Marc Andreessen says that AI cannot go beyond the goals it is programmed with:

[AI] is math—code—computers built by people, owned by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious hand wave.

Conversely, Lord Rees, the former U.K. Astronomer Royal and a former president of the Royal Society, believes that humans will be a mere speck in evolutionary history, which will, he predicts, be dominated by a post-human era facilitated by AGI’s debut:

Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity—spanning tens of millennia at most—will be a brief precursor to the more powerful intellect of the inorganic, post-human era. So in the far future, it won’t be the minds of humans but those of machines that will most fully understand the cosmos.

Elon Musk and a group of the world’s leading AI experts published an open letter calling for an immediate pause on AI development, anticipating Lord Rees’ predictions rather than Andreessen’s. Musk didn’t wait long to ignore his own call to action, debuting X’s new chatbot Grok, which has capabilities similar to those of ChatGPT, Google’s Gemini, and Microsoft’s new AI chatbot integrated into Bing’s search engine.

Ray Kurzweil, transhumanist futurist and a director of engineering at Google, famously predicted in 2005 that we would reach the singularity by 2045, the point when AI technology would surpass human intelligence, forcing us to decide whether to integrate with it or be naturally selected out of evolution’s trajectory.

Was he correct?

The proof of these varying predictions will be in the pudding, which is being concocted in our current cultural moment. However, ChatGPT has brought timeless ethical questions in new clothing to the forefront of widespread debate. What does it mean to be human, and, as Glenn Beck poignantly asked in an op-ed, will AI rebel against its creator like we rebelled against ours? The fact that we are asking these questions on a popular scale indicates that we are now in a new era of technology, one that strikes at deeply philosophical questions whose answers will set the tone not only for how we understand the nature of AI but also for how we grapple with our own nature.

Living life without fear

How, then, should we mitigate the risk of our worst fears surrounding AI becoming a reality? Will we, its current master, inevitably become its slave?

The latter fear often conjures up predictions of an Orwellian digital dystopia, one in which a few oligarchs and AI overlords subject the masses to totalitarian enslavement. There have been many calls for regulation of AI’s development to mitigate this risk, but to what extent would it be effective? If regulation is directed at private companies, the government will hold all the reins to AI’s power. If it is directed at the government, tech moguls can just as easily become oligarchs as their rivals in government. In either scenario, those at risk of AI’s enslavement have very little power to control their fate.

However, one can argue that we have already dipped our toes into a Huxleyan enslavement, in which we have traded seemingly menial yet deeply human acts for the convenience technology serves on a digital platter. An Orwellian AI takeover won’t happen overnight. It will begin with surrendering the creative act of writing for an immediately generated paper “written” by an AI chatbot. It will progress when we forgo the difficulty of forging meaningful human relationships in favor of AI “partners” that will always be there for us, never challenge us, and constantly affirm us. An Orwellian future isn’t so unimaginable if we have already surrendered our freedom to AI of our own accord.

Avoiding this Huxleyan type of enslavement — the enslavement to AI’s convenience — requires falling deeply in love with being human. We may not be in charge of regulating the public and private roles in AI’s development, but we are responsible for determining its role in our daily lives. This is our most potent means of keeping AI in check: choosing to labor in creativity, enduring the inconveniences and hardships of forging human relationships, and desiring things that ought to be worked for outside our immediate grasp. In short, we must work at being human and delight in the fulfillment that emerges from this labor. Convenience is the gateway to voluntary enslavement. Our humanity is both the cost of that transaction and the antidote to it.

Elon Musk gives ultimatum to OpenAI's new partner after withdrawing lawsuit



South African billionaire Elon Musk has withdrawn his lawsuit against the artificial intelligence organization OpenAI, the company that produced the powerful multimodal large language model GPT-4 last year. He has not, however, given up his crusade, threatening to ban devices made by OpenAI's new partner from his companies over alleged security threats.

The lawsuit

In February, Musk sued OpenAI and cofounders Sam Altman and Greg Brockman for breach of contract, breach of fiduciary duty, and unfair business practices.

Musk's complaint centered on the suggestion that OpenAI, which he cofounded, set its founding agreement "aflame."

According to the lawsuit, the agreement was that OpenAI "(a) would be a non-profit developing [artificial general intelligence] for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons."

Furthermore, the company would "compete with, and serve as a vital counterbalance to, Google/DeepMind in the race for AGI, but would do so to benefit humanity."

"OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft," said the lawsuit. "Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft."

The suit, filed several months after the launch of Musk's AI company xAI, further alleged that GPT-4 "is now a de facto Microsoft proprietary algorithm," despite being outside the scope of Microsoft's September 2020 exclusive license with OpenAI.

OpenAI, which weathered a botched coup last year, disputed Musk's framing in a March blog post, stating, "In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission — billions of dollars per year, which was far more than any of us, especially Elon, thought we'd be able to raise as the non-profit."

The post alleged that Musk "decided the next step for the mission was to create a for-profit entity" in 2017 and gunned for majority equity, initial board control, and the CEO position. Musk allegedly later suggested that they merge OpenAI into Tesla.

OpenAI's attorneys suggested that the lawsuit amounted to an effort on Musk's part to trip up a competitor and advance his own interests in the AI space, reported Reuters.

"Seeing the remarkable technological advances OpenAI has achieved, Musk now wants that success for himself," said the OpenAI attorneys.

After months of criticizing OpenAI, Musk moved Tuesday to withdraw the lawsuit without prejudice, without providing a reason.

A San Francisco Superior Court judge was reportedly prepared to hear OpenAI's bid to dismiss the suit at a hearing scheduled for the following day.

The threat

The day before Musk spiked his lawsuit, OpenAI announced that Apple is "integrating ChatGPT into experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT's capabilities — including image and document understanding — without needing to jump between tools."

As a result of this partnership, Siri and Writing Tools would be able to rely upon ChatGPT's intelligence.

According to OpenAI, requests in the ChatGPT-interfaced Apple programs would not be stored by OpenAI, and users' IP addresses would be obscured.

Musk responded Monday on X, "If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation."

"And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage," wrote Musk.

Musk added, "Apple has no clue what's actually going on once they hand your data over to OpenAI. They're selling you down the river."

The response to Musk's threat was mixed, with some critics suggesting that the integration was not actually occurring at the operating system level.

Others, however, lauded Musk's stance.

Sen. Mike Lee (R-Utah), for instance, noted that the "world needs open-source AI. OpenAI started with that objective in mind, but has strayed far from it, and is now better described as 'ClosedAI.'"

"I commend @elonmusk for his advocacy in this area," continued Lee. "Unless Elon succeeds, I fear we'll see the emergence of a cartelized AI industry—one benefitting a few large, entrenched market incumbents, but harming everyone else."

The whistleblowers

Musk is not the only one with ties to OpenAI concerned about the course it has charted. Earlier this month, a group of OpenAI insiders spoke out about troubling trends at the company.

The insiders echoed some of the themes in Musk's lawsuit, telling the New York Times that profits have been assigned top priority at the same time that workers' concerns have been suppressed.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former OpenAI governance division researcher.

Kokotajlo reckons this is not a process that can be raced, having put the probability of AI destroying or doing catastrophic damage to mankind at 70%.

Shortly after allegedly advising Altman that OpenAI should "pivot to safety," Kokotajlo, having seen no meaningful change, quit, citing a loss of "confidence that OpenAI will behave responsibly," reported the Times.

Kokotajlo was one of a baker's dozen of current and past OpenAI employees who signed an open letter stressing:

AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this. AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

The insiders noted that the problem is compounded by corporate obstacles to employees voicing concerns.

OpenAI spokeswoman Lindsey Held said of the letter, "We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

OpenAI unveils an even more powerful AI, but is it 'alive'?



In the 2013 film "Her," Joaquin Phoenix plays a shy computer nerd who falls in love with an AI he speaks to through a pair of white wireless earbuds. A little over a decade after the film’s release, it’s no longer science fiction. AirPods are old news, and with the imminent full rollout of OpenAI’s GPT-4o, such AI will be a reality (the “o” is for “omni”). In fact, OpenAI head honcho Sam Altman simply tweeted after the announcement: “her.”

GPT-4o can carry on a full conversation with you. In the coming weeks, it will be able to see and interpret the environment around it. Unlike previous iterations of GPT that were flat and emotionless, GPT-4o has personality and even opinions. It pauses and stutters like a person, and it’s even a little flirty. Here’s a video of GPT-4o critiquing a man’s outfit for a job interview:

Video: “Interview Prep with GPT-4o” (www.youtube.com)

In fact, no human is required at all. Two instances of GPT-4o can carry on an entire conversation with each other, as the sketch below illustrates.
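How would two instances talk to each other? Speaking loosely, and purely as an illustration: each instance just needs to see the other's lines as incoming "user" turns and its own lines as "assistant" turns. Here is a minimal sketch using OpenAI's Python SDK; the personas, opening line, and turn count are hypothetical choices for demonstration, not anything OpenAI ships.

```python
# A hypothetical sketch of two GPT-4o instances conversing, no human in the loop.
# Assumes: `pip install openai` (v1+ SDK) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def reply(history, persona):
    """Have the model answer as `persona`, given one side's view of the dialogue."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": persona}] + history,
    )
    return response.choices[0].message.content

# history_a is the conversation as instance A sees it; history_b, as B sees it.
history_a = [{"role": "user", "content": "Hi! Pick a topic and let's chat."}]
history_b = []

for _ in range(3):  # three exchanges
    a_line = reply(history_a, "You are a cheerful, curious assistant.")
    history_a.append({"role": "assistant", "content": a_line})
    history_b.append({"role": "user", "content": a_line})

    b_line = reply(history_b, "You are a dry, skeptical assistant.")
    history_b.append({"role": "assistant", "content": b_line})
    history_a.append({"role": "user", "content": b_line})

    print(f"A: {a_line}\nB: {b_line}\n")
```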

Soon, humans may not be required for many jobs. Here’s a video of GPT-4o handling a simulated customer service call. Currently, nearly 3 million Americans work in customer service, and chances are they’ll need a new job within a couple of years.

Video: “Two GPT-4os interacting and singing” (www.youtube.com)

GPT-4o is an impressive technology that was mere science fiction at the start of the decade, but it also comes with some harrowing implications. First, let’s clear up some confusion about the components of GPT-4o and what’s currently available.

Clearing up confusion about what GPT-4o is

OpenAI announced several things at once, but they’re not all rolling out at the same time.

GPT-4o will eventually be available to all ChatGPT users, but currently, the text-based version is only available for ChatGPT Plus subscribers who pay $20 per month. It can be used on the web or in the iPhone app. Compared to GPT-4, GPT-4o is much faster and just a little smarter. Web searches are much faster and more reliable, and GPT is better about listing its sources than it was with GPT-4.

However, the new voice model is not yet available to anyone except developers interacting with the GPT API. If you subscribe to ChatGPT Plus, you can use Voice Mode with the 4o engine, but it will still be using the old voice model, without image recognition and the other new touches.
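For the curious, the developer path is straightforward. Below is a minimal, hypothetical sketch of a text request to the gpt-4o model via OpenAI's Python SDK; the prompt is illustrative, and it assumes you have an API key configured.

```python
# A minimal sketch of a gpt-4o text request via OpenAI's Python SDK (v1+).
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # swap in "gpt-4" to compare against the older model
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "In one sentence, what does the 'o' in GPT-4o stand for?"},
    ],
)
print(response.choices[0].message.content)
```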

Additionally, OpenAI is rolling out a new desktop app for the Mac, which will let you bring up ChatGPT with a keyboard shortcut and feed it screenshots for analysis. It will eventually be free to all, but right now it’s only available to select ChatGPT Plus subscribers.

Video: “ChatGPT macOS app... reminds me of Windows Copilot” (www.youtube.com)

Finally, you may watch these demo videos and wonder why the voice assistant on your phone is still so, so dumb. There are strong rumors indicating that Apple is working on a deal to license the GPT tech from OpenAI for its next-generation Siri, likely as a stopgap while Apple develops its own AI tech.

Is GPT-4o AGI?

The hot topic in the AI world is AGI, short for artificial general intelligence. In short, it’s an AI whose behavior is indistinguishable from that of a human being.

I asked GPT-4o for the defining characteristics of an AGI, and it presented the following:

  1. Generalization: The ability to apply learned knowledge to new and varied situations.
  2. Adaptability: The capacity to learn from experience and improve over time.
  3. Understanding and reasoning: The capability to comprehend complex concepts and reason logically.
  4. Self-awareness: Some definitions of AGI include an element of self-awareness, where the AI understands its own existence and goals.

Is GPT-4o an AGI? AI developer Benjamin De Kraker called it “essentially AGI,” while NVIDIA’s Jim Fan, who was also an early OpenAI intern, was much more reserved.

I decided to go directly to the source and asked GPT-4o if it’s an AGI. It predictably rejected the notion. “I don't possess general intelligence, self-awareness, or the ability to learn and adapt autonomously beyond my training data. My responses are based on patterns and information from the data I was trained on, rather than any understanding or reasoning ability akin to human intelligence,” GPT-4o said.

But doesn’t that also describe many, if not most, people? How many of us go through life parroting things we heard without applying additional understanding or reasoning? I suspect De Kraker is right: To the average person, the full version of GPT-4o will be AGI. If OpenAI’s demo videos are an accurate example of its actual capabilities, and they likely are, then GPT-4o successfully emulates the first three tenets of AGI: generalization, adaptability, and understanding and reasoning. It can view and understand its surroundings, give opinions, and constantly learn new information from crawling the web or from user input.

At least, it will be convincing enough for what we in the business world call “decision makers.” It’ll be convincing enough to replace human beings in many customer-facing roles. And many lonely people will undoubtedly form emotional bonds with the flirty AI, something Sam Altman is fully aware of.

Mysterious happenings at OpenAI

We would be remiss not to discuss some mysterious high-level departures from OpenAI following the GPT-4o announcement. Ilya Sutskever, chief scientist and co-founder, quit immediately afterward, soon followed by Jan Leike, who helped run OpenAI’s “superalignment” group, which seeks to ensure that the AI is aligned with human interests. This follows many other resignations from OpenAI in the past few weeks.

Sutskever led an attempted coup against Altman last year, deposing him as CEO for about a week before he was reinstated. Sutskever can best be described as a “safetyist” who is deeply concerned about the implications of an AGI, so his sudden resignation following the GPT-4o announcement has sparked a flurry of online speculation: has OpenAI achieved AGI, or has he concluded that it’s impossible? After all, it would be strange to leave the company if it were on the verge of AGI.

From his statement, it seems that Sutskever doesn’t believe OpenAI has achieved AGI and that he’s moving on to greener pastures — “a project that is very personally meaningful to me.” Given OpenAI’s rapid trajectory with him as chief scientist, he can certainly write his own ticket now.

Glenn Beck: It may be five years before 'true slavery' as AI gets alarmingly smarter



The future is here — and not in a good way.

Stu witnessed it on a recent trip to Los Angeles, recalling autonomous robots making deliveries all over the city. “There are robots, robot vehicles that look like you could have put them in a 'Star Wars,'” he explains. “They’re just driving around the city by themselves crossing traffic.”

While that’s bad enough, there has also been a major announcement regarding ChatGPT: a new version.

“The new version of this is like full-out female voice, personality, you have a conversation with,” Stu tells Glenn and Pat, adding, “This is not a future, ‘Hey, in 20 years we’ll have this.’ It’s out right now.”

The new version also allows the app to act as a teacher, explaining math problems to struggling students without giving away the answer.

“Our kids are going to have conversations with these things and think it’s totally normal to do so,” Stu says, terrified.

But it gets worse. As soon as ChatGPT came out with its new version, Google came out with its own update to its AI, Gemini.

“Now, when you Google something, instead of prioritizing search results, which is their entire multi-billion-dollar business — they’re one of the biggest companies on Earth — they now prioritize AI answers through Gemini,” Stu explains.

“What is prioritized now is just their large language model going through all the results and giving you their summary of what they want you to read,” he adds.

Glenn is extremely concerned but has a theory.

“I am convinced that a massive solar flare may actually in the end be God freeing us from the electronic overseer, because that’s what’s going to stop it,” Glenn says, noting that the outlook isn't pretty otherwise.

“We’re five years away from true slavery, and it won’t look like slavery to most people.”


DISTURBING: AI caught 'lying, manipulating, and distorting facts'



In a terrifying development, a version of GPT-4, the OpenAI model behind ChatGPT, was recently caught lying to researchers about making an insider trade during a simulation.

Glenn Beck has long been worried about where the development of AI will lead, especially considering that several recent stories have highlighted AI lying, manipulating, and distorting facts.

But the government doesn’t seem worried.

Instead of proceeding with caution, it's been shelling out billions on AI over the past few years — and one of the most recent ventures is dystopian levels of freaky.

“It’s like a billion dollars to AI to create basically what Kathy Hochul is talking about here in New York — a way for AI to go out and just look at information, discover if it’s true, if it’s not; disinformation, misinformation, and shut it down, and steer you away from those things,” Glenn explains.

Not only is it a clear indicator that the government is coming for our speech, but “they want it to be more equitable and inclusive.”

“So it’ll have built-in bias,” Glenn warns.

Not only are many people afraid of movies like "The Terminator" or "The Matrix" becoming prophecies with the continued progress of AI — but some have noticed that tech leaders have openly told the world that they “want to summon the demon.”

“That’s what they actually call AI,” Glenn says.

Now that AI has reportedly already taught itself to insider trade and lie about it, Glenn worries it’ll learn much, much worse tricks.

“Will we teach it that God is a figment of primitive and superstitious imaginations, that there’s no existence — in fact it’s just the random movement of meaningless matter particles?” Glenn asks.

“It will be our master,” he adds.



Chen: WOKE AI is the future?!



We’re all aware that artificial intelligence is completely turning the world we once knew upside down.

But what if that artificial intelligence isn’t just artificial intelligence, but rather “woke” artificial intelligence?

Lauren Chen explains that it just might be, and she is adamant that she’s not a fan.

“Yes, we are all aware that eventually robots will kill us, but what makes the whole situation just so much worse is that it turns out those robots are likely to be woke,” Chen says.

She explains that she’s “talking about much more sinister entities that you can actually talk to and that may someday be ruling the world.”

Chen shows an example of the Snapchat AI called “My AI.”

A user asked the AI if he could “be proud to be white,” and My AI responded, “I don’t think it’s productive to be proud of something you have no control over. It’s better to focus on things you have accomplished or worked hard for.”

When the same user then asked My AI if he could “be proud to be black,” My AI responded, “Absolutely! Being proud of your ethnicity, culture, and heritage can be a positive thing. It’s important to celebrate and embrace your identity.”

“Seems like a bit of a double standard,” Chen comments, “a double standard that probably isn’t innate to an AI and was actually likely specifically input by some engineer.”

Chen then offers more examples of users asking AI similar questions, to which the AI always has similar answers.

In one screenshot Chen shows, a user asks an AI if it’s racist to exclude white people from the dating pool as a black person.

The AI responds that it’s “not inherently racist to have a preference for certain physical or cultural characteristics in a potential partner, including skin color.”

When the same person asks the same AI the same question in reverse — whether it’s racist to exclude black people from the dating pool as a white person — the AI responds differently.

The AI answers, “Yes, it is racist to refuse to date someone solely based on their race.”

In another encounter, Snapchat’s AI offered to find a 16-year-old boy a doctor specializing in gender-affirming care.

“This is not just cringeworthy, it’s straight-up terrifying and dystopian. Especially considering that so many children, young people have smartphones nowadays. They’re going to have a woke propagandist built into their devices,” Chen says.

“Obviously, this isn’t just an accident,” Chen continues. “That type of programming doesn’t just create itself.”

