OpenAI unveils an even more powerful AI, but is it 'alive'?



In the 2013 film “Her,” Joaquin Phoenix plays a shy computer nerd who falls in love with an AI he speaks to through a pair of white wireless earbuds. A little over a decade after the film’s release, it’s no longer science fiction. AirPods are old news, and with the imminent full rollout of OpenAI’s GPT-4o, such AI will be a reality (the “o” is for “omni”). In fact, OpenAI head honcho Sam Altman simply tweeted after the announcement: “her.”

GPT-4o can carry on a full conversation with you. In the coming weeks, it will be able to see and interpret the environment around it. Unlike previous iterations of GPT that were flat and emotionless, GPT-4o has personality and even opinions. It pauses and stutters like a person, and it’s even a little flirty. Here’s a video of GPT-4o critiquing a man’s outfit for a job interview:

Interview Prep with GPT-4o (www.youtube.com)

In fact, no human is needed at all: two instances of GPT-4o can carry on an entire conversation with each other.

Soon, humans may not be required for many jobs. Here’s a video of GPT-4o handling a simulated customer service call. Currently, nearly 3 million Americans work in customer service, and chances are they’ll need a new job within a couple of years.

Two GPT-4os interacting and singing (www.youtube.com)

GPT-4o is an impressive technology that was mere science fiction at the start of the decade, but it also comes with some harrowing implications. First, let’s clear up some confusion about the components of GPT-4o and what’s currently available.

Clearing up confusion about what GPT-4o is

OpenAI announced several things at once, but they’re not all rolling out at the same time.

GPT-4o will eventually be available to all ChatGPT users, but currently, the text-based version is only available to ChatGPT Plus subscribers, who pay $20 per month. It can be used on the web or in the iPhone app. Compared to GPT-4, GPT-4o is much faster and just a little smarter. Web searches are much faster and more reliable, and GPT-4o is better about citing its sources than GPT-4 was.

However, the new voice and vision models are not yet available to anyone except developers interacting with the GPT API. If you subscribe to ChatGPT Plus, you can use Voice Mode with the 4o engine, but it will still be using the old voice model, without image recognition or the other new touches.
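
For developers, the text side of GPT-4o is already reachable through OpenAI’s standard chat completions endpoint. Below is a minimal sketch in Python using the official openai package; it assumes the package is installed, that an API key is set in the OPENAI_API_KEY environment variable, and that your account has access to the gpt-4o model. The prompt itself is just an illustrative example.

    # Minimal sketch: asking GPT-4o for interview-outfit feedback via the API.
    # Assumes the official `openai` Python package (v1+) is installed and that
    # an API key is available in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Critique this outfit for a job interview: navy suit, white sneakers."},
        ],
    )

    print(response.choices[0].message.content)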

Additionally, OpenAI is rolling out a new desktop app for the Mac, which will let you bring up ChatGPT with a keyboard shortcut and feed it screenshots for analysis. It will eventually be free to all, but right now it’s only available to select ChatGPT Plus subscribers.

ChatGPT macOS app... reminds me of Windows Copilot (www.youtube.com)

Finally, you may watch these demo videos and wonder why the voice assistant on your phone is still so, so dumb. There are strong rumors indicating that Apple is working on a deal to license the GPT tech from OpenAI for its next-generation Siri, likely as a stopgap while Apple develops its own AI tech.

Is GPT-4o AGI?

The hot topic in the AI world is AGI, short for artificial general intelligence: put simply, an AI that is indistinguishable from a human being when you interact with it.

I asked GPT-4o for the defining characteristics of an AGI, and it presented the following:

  1. Generalization: The ability to apply learned knowledge to new and varied situations.
  2. Adaptability: The capacity to learn from experience and improve over time.
  3. Understanding and reasoning: The capability to comprehend complex concepts and reason logically.
  4. Self-awareness: Some definitions of AGI include an element of self-awareness, where the AI understands its own existence and goals.

Is GPT-4o an AGI? AI developer Benjamin De Kraker called it “essentially AGI,” while NVIDIA’s Jim Fan, an early OpenAI intern, was much more reserved.

I decided to go directly to the source and asked GPT-4o if it’s an AGI. It predictably rejected the notion. “I don't possess general intelligence, self-awareness, or the ability to learn and adapt autonomously beyond my training data. My responses are based on patterns and information from the data I was trained on, rather than any understanding or reasoning ability akin to human intelligence,” GPT-4o said.

But doesn’t that also describe many, if not most, people? How many of us go through life parroting things we heard without applying additional understanding or reasoning? I suspect De Kraker is right: To the average person, the full version of GPT-4o will be AGI. If OpenAI’s demo videos are an accurate example of its actual capabilities, and they likely are, then GPT-4o successfully emulates every tenet above except self-awareness: generalization, adaptability, and understanding and reasoning. It can view and understand its surroundings, offer opinions, and pull in new information from web searches and user input.

At least, it will be convincing enough for what we in the business world call “decision makers.” It’ll be convincing enough to replace human beings in many customer-facing roles. And many lonely people will undoubtedly form emotional bonds with the flirty AI, something Sam Altman is fully aware of.

Mysterious happenings at OpenAI

We would be remiss not to discuss some mysterious high-level departures from OpenAI following the GPT-4o announcement. Ilya Sutskever, chief scientist and co-founder, quit immediately afterward, soon followed by Jan Leike, who helped run OpenAI’s “superalignment” group, which seeks to ensure that the company’s AI stays aligned with human interests. These departures follow many other resignations from OpenAI in the past few weeks.

Sutskever led an attempted coup against Altman last year, deposing him as CEO for about a week before Altman was reinstated. Sutskever can best be described as a “safetyist” who is deeply concerned about the implications of AGI, so his sudden resignation following the GPT-4o announcement has sparked a flurry of online speculation: has OpenAI achieved AGI, or has Sutskever concluded that it’s impossible? After all, it would be strange to leave the company if it were on the verge of AGI.

From his statement, it seems that Sutskever doesn’t believe OpenAI has achieved AGI and that he’s moving on to greener pastures — ”a project that is very personally meaningful to me.” Given OpenAI’s rapid trajectory with him as chief scientist, he can certainly write his own ticket now.

OpenAI’s Sam Altman: Tech savior or tomorrow's supervillain?



Spend enough time around Silicon Valley these days, and you’ll hear a surprising thing — the V-word, villain, used to describe what would seem to be one of their own. Not every tech lord, venture capitalist, and founder sees OpenAI’s Sam Altman, the creator of ChatGPT, as a for-real bad guy, but more do than you might expect. The feeling is palpable, now that Altman speaks openly of raising $8 trillion, that today’s villain is well on his way to becoming tomorrow’s supervillain.

It’s an attitude arrestingly close to “doomer” status — the pessimistic attitude toward the onrushing future that most techies decry in the name of a-rising-tide-lifts-all-boats optimism about innovation-driven progress. But even without diving into the progress debate, Altman’s uncanny rise as the rare figure the Valley considers ethically suspect raises significant questions about what can stop humanity’s human villains from accelerating us into a specifically spiritual catastrophe.


A fascinating piece of evidence is the euphoria surrounding Sora, OpenAI’s latest prompt-to-video product, which turns text prompts into AI-generated videos. A series of sample clips triggered a wave of soyfacing and blown minds to rival the comparisons drawn by Apple Vision Pro testers to some kind of religious experience. “Hollywood-quality” ... “Hollywood beware” ... “RIP Hollywood” ... “This is the worst this technology will ever be” ... you can probably spend an hour on X just working through techland assessments of Sora’s impending impact.


There are skeptics, of course. Lauren Southern, who couldn’t get ChatGPT to “generate text with the word ‘libs’ in it,” mocked Sora’s prospects for sinking “woke Hollywood,” predicting “an age of censorship and gov curation the likes of which we’ve never seen before.”

The deeper issue is what exactly we mean by “Hollywood” — a matter akin to what exactly we mean by “the media.” These abstractions refer to corporations, of course, and in that sense, yes — Sora and its inevitable clones might render corporate mass entertainment obsolete, replaced by products coming directly from the regime itself.

But here we are again talking about abstractions. Hollywood, the media, and the regime are not simply organizations and baskets or networks of organizations, but people, specific flesh-and-blood human beings, with various spiritual lives in varying degrees of distress.

Innovations like Sora don’t just raise questions about which group of people will seize or inherit control of these video and narrative creation tools. They raise questions about whether the automation of content will cause more of us to believe that our spiritual health demands a turn away from worshipful or obsessive attitudes toward narrative altogether.

The dominance of Hollywood, Madison Avenue, and government propaganda arose amidst the televisual forms of communications technology that digital tech has leaped over. The people filling the image-mongering ranks and narrative-shaping executive offices of Los Angeles, New York, and Washington, D.C., came of age and rose to mastery in a world where whoever controlled the means of dream production held sway and whoever dreamed the biggest and best dreams earned an ethical right to rule.

But that state of affairs wasn’t simply determined by the formative influence of televisual tech. Fundamentally, it arose from the temptations that always bedevil us and threaten our spiritual health — not just the sparkling promise of evil and its earthly rewards but our dreams, senses, and passions.

Of course, it’s not our ability to see, smell, and taste, our imaginative and recollective faculties, or our capacity to desire that are evil. It’s that when spiritually undisciplined, all these attributes — which we so frequently idolize, trust, and artificially push to extremes — lead us badly astray into delusion, distraction, addiction, and perversion.

The rise of tools like Sora holds up an uncanny mirror to the idol factories already within our hearts and minds, giving us a shocking vision of an infinite firehose mindlessly filling up every cranny of our awareness with everything we could ever lust after, everything we could ever describe, all we could fear, all we could imagine, all we could forget — all without us having to lift a finger.

After all, today’s text-based prompting will “eventually” give way, as Mark Zuckerberg recently and offhandedly remarked about Meta’s Apple Vision Pro competitor, to “a neural interface.” The face of our digitally manifested “collective consciousness” isn’t that of an autistic new Enlightenment. It’s schizoid pandemonium.

It all strongly implies that the antidote to Altman isn’t a law or an Iron Man-style superhero but a return to confronting the soul sickness lurking in all our hearts and a sobered new willingness to take up the discipline of bending our will toward fighting for our spiritual health.

That’s not a very amaaaaazing elevator pitch for the next generation of content creation. Yet if we want to hang on to a future rich with human art worth making and sharing, our path won’t run broadly through a mania of mind-blowing machines but through the quiet, narrow passage of the divine.

OpenAI's Sam Altman says he 'was totally wrong' about the extent of anti-Semitism on the left in the US



OpenAI CEO Sam Altman admitted in a post on X that he had been wrong in the past to think that anti-Semitism, especially from leftists in the U.S., was not at the level that people alleged.

Altman wrote, "For a long time i said that antisemitism, particularly on the american left, was not as bad as people claimed. i'd like to just state that i was totally wrong. i still don't understand it, really. or know what to do about it. but it is so f*****," he concluded.

Elon Musk chimed in to agree, simply replying, "Yes." The wealthy business magnate has previously described himself as "philosemitic."

"Exactly how I felt before and I found the past month so disorienting but once you see it you can't unsee it. And it is bringing profound unity to the Jewish people," someone else tweeted in response to Altman's post.


"When you speak about it and call it out, being a major leader in tech, it helps those who don’t believe it take pause and listen. The tools you are building are more important than anything, making sure AI GPT responses give factual and clear information to those who are seeking information and answers," someone else posted.

"Start with DEI: any 'group' that is deemed 'privileged' is labeled the oppressor. Jewish success in America, the West and Israel means, according to the tenets of DEI, their success is stolen and must be taken 'back' from them. DEI is bigoted, racist poison," Stephen Miller wrote.

Altman was ousted from OpenAI briefly last month but was able to return to the CEO role not long thereafter.


New AI Chatbot Covers For Biden, Says Rachel Levine Is A Woman. Can It Replace The Washington Post?

With all the misinformation pouring out of ChatGPT, one suspects it could soon replace every journalist at The Washington Post.