As Trump's polling lead continues, the left's case of 'Biden-copium' grows and grows



Despite Joe Biden’s horrific standing in the polls, constant stream of senile gaffes, and inability to properly address any emergency — let alone the American people — the left still goes to bat for him.

Host of "Stu Does America," Stu Burguiere, calls it the left's “Biden-copium,” and a recent New York Times op-ed by Ezra Klein illustrates his phrase perfectly.

The article details a series of different liberal theories as to why Joe Biden is losing and what he should do about it. While Klein is a liberal himself, Stu finds himself agreeing with his opinions on these liberal theories.

Theory number one is that “the polls are wrong,” which Klein rejects — not because polls can’t be wrong, but because when they have been wrong, the error has favored Democrats.

“To the extent polls have been wrong in recent presidential elections, they’ve been wrong because they’ve been biased toward Democrats. Trump ran stronger in 2016 and 2020 than polls predicted. Sure, the polls could be wrong. But that could mean Trump is stronger, not weaker, than he looks,” Klein writes.

“This is totally accurate,” Stu comments. “The polls have been, generally speaking, relatively accurate, and I say those words specifically because what they are not is accurate. They are never accurate. They’re not accurate because they aren’t designed to tell us exactly what we want to know.”

Stu doesn’t find the next theory agreeable at all — which is essentially that the media is being too kind to Donald Trump.

“I don’t think it’s the mainstream media’s fault if you’re worried about Joe Biden winning. They’re doing everything they can to make this happen. The question really is: Will it be enough right now?” Stu says.

Stu believes the most “idiotic of all” of the theories is theory number three: “It’s a bad time to be an incumbent.”

“Polls are not showing an anti-incumbent mood. They’re showing an anti-Biden mood,” Klein writes.

“Yeah, look. The incumbency is your most powerful weapon. The only reason this is close at all is because Joe Biden is an incumbent,” Stu says.


OpenAI unveils an even more powerful AI, but is it 'alive'?



In the 2013 film "Her," Joaquin Phoenix plays a shy computer nerd who falls in love with an AI he speaks to through a pair of white wireless earbuds. A little over a decade after the film’s release, it’s no longer science fiction. AirPods are old news, and with the imminent full rollout of OpenAI’s GPT-4o, such AI will be a reality (the “o” is for “omni"). In fact, OpenAI head honcho Sam Altman simply tweeted after the announcement: “her.”

GPT-4o can carry on a full conversation with you. In the coming weeks, it will be able to see and interpret the environment around it. Unlike previous iterations of GPT, which were flat and emotionless, GPT-4o has personality and even opinions. It pauses and stutters like a person, and it’s even a little flirty. Here’s a video of GPT-4o critiquing a man’s outfit for a job interview:

Interview Prep with GPT-4o (www.youtube.com)

In fact, no human is required at all: two instances of GPT-4o can carry on an entire conversation with each other.

Soon, humans may not be required for many jobs. Here’s a video of GPT-4o handling a simulated customer service call. Currently, nearly 3 million Americans work in customer service, and chances are they’ll need a new job within a couple of years.

Two GPT-4os interacting and singing (www.youtube.com)

GPT-4o is an impressive technology that was mere science fiction at the start of the decade, but it also comes with some harrowing implications. First, let’s clear up some confusion about the components of GPT-4o and what’s currently available.

Clearing up confusion about what GPT-4o is

OpenAI announced several things at once, but they’re not all rolling out at the same time.

GPT-4o will eventually be available to all ChatGPT users, but currently, the text-based version is only available to ChatGPT Plus subscribers, who pay $20 per month. It can be used on the web or in the iPhone app. Compared to GPT-4, GPT-4o is much faster and just a little smarter. Web searches are much faster and more reliable, and GPT-4o is better about listing its sources than GPT-4 was.

However, the new voice and vision models are not yet available to anyone except developers interacting with the GPT API. If you subscribe to ChatGPT Plus, you can use Voice Mode with the 4o engine, but it will still use the old voice model, without image recognition or the new touches.
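For developers curious about that API side, here’s a minimal sketch of a text request to the gpt-4o model using OpenAI’s official Python SDK. The model name and chat-completions call are OpenAI’s published interface; the prompt itself is just an illustration, and the code only sends the request if an API key is actually configured:

```python
# Minimal sketch of a GPT-4o chat request via OpenAI's Python SDK.
# Requires `pip install openai` and an OPENAI_API_KEY environment
# variable to actually send; the prompt is illustrative only.
import os

def build_request(user_prompt: str) -> dict:
    """Assemble a chat-completion payload targeting the gpt-4o model."""
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

if __name__ == "__main__":
    payload = build_request("Critique this outfit for a job interview.")
    if os.environ.get("OPENAI_API_KEY"):
        from openai import OpenAI  # third-party SDK, needed only to send
        client = OpenAI()
        reply = client.chat.completions.create(**payload)
        print(reply.choices[0].message.content)
    else:
        # No key set: just show which model the payload targets.
        print(payload["model"])
```

The same endpoint accepts image inputs for the vision features, but as noted above, the new voice model itself is not yet exposed to most users.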

Additionally, OpenAI is rolling out a new desktop app for the Mac, which will let you bring up ChatGPT with a keyboard shortcut and feed it screenshots for analysis. It will eventually be free to all, but right now it’s only available to select ChatGPT Plus subscribers.

ChatGPT macOS app... reminds me of Windows Copilot (www.youtube.com)

Finally, you may watch these demo videos and wonder why the voice assistant on your phone is still so, so dumb. There are strong rumors indicating that Apple is working on a deal to license the GPT tech from OpenAI for its next-generation Siri, likely as a stopgap while Apple develops its own AI tech.

Is GPT-4o AGI?

The hot topic in the AI world is AGI, short for artificial general intelligence. In short, it’s an AI whose interactions are indistinguishable from a human being’s.

I asked GPT-4o for the defining characteristics of an AGI, and it presented the following:

  1. Generalization: The ability to apply learned knowledge to new and varied situations.
  2. Adaptability: The capacity to learn from experience and improve over time.
  3. Understanding and reasoning: The capability to comprehend complex concepts and reason logically.
  4. Self-awareness: Some definitions of AGI include an element of self-awareness, where the AI understands its own existence and goals.

Is GPT-4o an AGI? AI developer Benjamin De Kraker called it “essentially AGI,” while NVIDIA’s Jim Fan, who was also an early OpenAI intern, was much more reserved.

I decided to go directly to the source and asked GPT-4o if it’s an AGI. It predictably rejected the notion. “I don't possess general intelligence, self-awareness, or the ability to learn and adapt autonomously beyond my training data. My responses are based on patterns and information from the data I was trained on, rather than any understanding or reasoning ability akin to human intelligence,” GPT-4o said.

But doesn’t that also describe many, if not most, people? How many of us go through life parroting things we heard without applying additional understanding or reasoning? I suspect De Kraker is right: To the average person, the full version of GPT-4o will be AGI. If OpenAI’s demo videos are an accurate example of its actual capabilities, and they likely are, then GPT-4o successfully emulates the first three tenets of AGI: generalization, adaptability, and understanding and reasoning. It can view and understand its surroundings, can give opinions, and it constantly learns new information from crawling the web or user input.

At least, it will be convincing enough for what we in the business world call “decision makers.” It’ll be convincing enough to replace human beings in many customer-facing roles. And many lonely people will undoubtedly form emotional bonds with the flirty AI, something Sam Altman is fully aware of.

Mysterious happenings at OpenAI

We would be remiss not to discuss some mysterious high-level departures from OpenAI following the GPT-4o announcement. Ilya Sutskever, chief scientist and co-founder, quit immediately after, soon followed by Jan Leike, who helped run OpenAI’s “superalignment” group that seeks to ensure that the AI is aligned with human interests. This follows many other resignations from OpenAI in the past few weeks.

Sutskever led an attempted coup against Altman last year, successfully deposing him as CEO for about a week before he was reinstated. Sutskever can best be described as a “safetyist” who is deeply concerned about the implications of an AGI, so his sudden resignation following the GPT-4o announcement has sparked a flurry of online speculation: either OpenAI has achieved AGI, or Sutskever has concluded it’s impossible — because it would be strange to leave the company if it were on the verge of AGI.

From his statement, it seems that Sutskever doesn’t believe OpenAI has achieved AGI and that he’s moving on to greener pastures — ”a project that is very personally meaningful to me.” Given OpenAI’s rapid trajectory with him as chief scientist, he can certainly write his own ticket now.

The effects of rapid AI expansion on our kids EXPOSED



AI is going too far, and most people have no idea.

“We’ve really advanced this stuff quickly, and this week came a lot of stuff that I don’t think people are even noticing anymore,” Stu Burguiere says.

One of the latest advancements, announced this past week, is for OpenAI’s ChatGPT. The company has created a feminine AI voice that you can have conversations with over your devices — and it sounds like a real woman.

The AI voice is capable of switching her tone on demand, going from joking around with her OpenAI creators to reading them a bedtime story like a mother would a child.

But that’s not all. The new AI is also capable of teaching students like a teacher would, coaching them through problems without revealing the answers.

“You got to think about the cheating ramifications of this,” Stu says, adding, “I mean it’s beyond insane, but also like the job implication of this.”

“When it comes to AI, it’s going to be very difficult to keep this one out of your kid’s life. It’s going to probably permeate at some level whether you like it or not, to almost every single school,” he explains.

“How long until we’re walking down the street, and we’re seeing our kids have full-on relationships with their phones? They’re already looking at them all the time, now they’re going to be talking to them all the time.”

Not only is this terrifying, but diversity, equity, and inclusion and critical race theory are already programmed into ChatGPT.

“All the things you’re against are built into these programs,” Stu says.

The Effects of Rapid AI Expansion on Our Kids EXPOSED | Ep 897


Stu Burguiere looks at the newest version of ChatGPT and speculates on what it and other advancements in artificial intelligence could mean for our children ...

Want more from Stu?

To enjoy more of Stu's lethal wit, wisdom, and mockery, subscribe to BlazeTV — the largest multi-platform network of voices who love America, defend the Constitution, and live the American dream.

OpenAI’s Sam Altman: Tech savior or tomorrow's supervillain?



Spend enough time around Silicon Valley these days, and you’ll hear a surprising thing — the V-word, villain, used to describe what would seem to be one of their own. Not every tech lord, venture capitalist, and founder sees OpenAI’s Sam Altman, the creator of ChatGPT, as a for-real bad guy, but more do than you might expect. The feeling is palpable, now that Altman speaks openly of raising $8 trillion, that today’s villain is well on his way to becoming tomorrow’s supervillain.

It’s an attitude arrestingly close to “doomer” status — the pessimistic attitude toward the onrushing future most techies decry in the name of a-rising-tide-lifts-all-boats optimism about innovation-driven progress. But even without diving into the progress debate, Altman’s uncanny advancement as the rare guy felt in the Valley to be suspect ethically raises significant questions about what can stop humanity’s human villains from accelerating us into a specifically spiritual catastrophe.

The face of our digitally manifested “collective consciousness” isn’t that of an autistic new Enlightenment. It’s schizoid pandemonium.

A fascinating piece of evidence is the euphoria surrounding OpenAI’s latest prompt-to-video product. Sora is a feature that will turn text into AI-generated videos. A series of sample clips triggered a wave of soyfacing and blown minds to rival the comparisons drawn by Apple Vision Pro testers to some kind of religious experience. “Hollywood-quality” ... “Hollywood beware” ... “RIP Hollywood” ... you can probably spend an hour on X just working through techland assessments of Sora’s impending impact. “This is the worst this technology will ever be.”

“More mind blowing Sora videos from the OpenAI team
1. Flower tiger”
— (@)

There are skeptics, of course. Lauren Southern, who couldn’t get ChatGPT to “generate text with the word ‘libs’ in it,” mocked Sora’s prospects for sinking “woke Hollywood,” predicting “an age of censorship and gov curation the likes of which we’ve never seen before.”

The deeper issue is what exactly we mean by “Hollywood” — a matter akin to what exactly we mean by “the media.” These abstractions refer to corporations, of course, and in that sense, yes — Sora and its inevitable clones might make corporate mass entertainment obsolete, replacing it with products straight from the regime itself.

But here we are again talking about abstractions. Hollywood, the media, and the regime are not simply organizations and baskets or networks of organizations, but people, specific flesh-and-blood human beings, with various spiritual lives in varying degrees of distress.

Innovations like Sora don’t just raise questions about which group of people will seize or inherit control of these video and narrative creation tools. They raise questions about whether the automation of content will cause more of us to believe that our spiritual health demands a turn away from worshipful or obsessive attitudes toward narrative altogether.

The dominance of Hollywood, Madison Avenue, and government propaganda arose amidst the televisual forms of communications technology that digital tech has leaped over. The people filling the image-mongering ranks and narrative-shaping executive offices of Los Angeles, New York, and Washington, D.C., came of age and rose to mastery in a world where whoever controlled the means of dream production held sway and whoever dreamed the biggest and best dreams earned an ethical right to rule.

But that state of affairs wasn’t simply determined by the formative influence of televisual tech. Fundamentally, it arose from the temptations that always bedevil us and threaten our spiritual health — not just the sparkling promise of evil and its earthly rewards but our dreams, senses, and passions.

Of course, it’s not our ability to see, smell, and taste, our imaginative and recollective faculties, or our capacity to desire that are evil. It’s that when spiritually undisciplined, all these attributes — which we so frequently idolize, trust, and artificially push to extremes — lead us badly astray into delusion, distraction, addiction, and perversion.

The rise of tools like Sora holds up an uncanny mirror to the idol factories already within our hearts and minds, giving us a shocking vision of an infinite firehose mindlessly filling up every cranny of our awareness with everything we could ever lust after, everything we could ever describe, all we could fear, all we could imagine, all we could forget — all without us having to lift a finger.

After all, today’s text-based prompting will “eventually” give way, as Mark Zuckerberg recently and offhandedly remarked about Meta’s Apple Vision Pro competitor, to “a neural interface.” The face of our digitally manifested “collective consciousness” isn’t that of an autistic new Enlightenment. It’s schizoid pandemonium.

It all strongly implies that the antidote to Altman isn’t a law or an Iron Man-style superhero but a return to confront the soul sickness lurking in all our hearts and a sobered new willingness to accept responsibility for taking on the discipline to bend our will toward fighting for our spiritual health.

That’s not a very amaaaaazing elevator pitch for the next generation of content creation. Yet if we want to hang on to a future rich with human art worth making and sharing, our path won’t run broadly through a mania of mind-blowing machines but through the quiet, narrow passage of the divine.