Elon Musk gives ultimatum to OpenAI's new partner after withdrawing lawsuit



South African billionaire Elon Musk has withdrawn his lawsuit against the artificial intelligence organization OpenAI, the company that produced the powerful multimodal large language model GPT-4 last year. He has not, however, given up his crusade, threatening to ban devices belonging to OpenAI's new partner at his companies on account of alleged security threats.

The lawsuit

In February, Musk sued OpenAI and cofounders Sam Altman and Greg Brockman for breach of contract, breach of fiduciary duty, and unfair business practices.

Musk's complaint centered on the suggestion that OpenAI, which he cofounded, set its founding agreement "aflame."

According to the lawsuit, the agreement was that OpenAI "would be a non-profit developing [artificial general intelligence] for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons."

Furthermore, the company would "compete with, and serve as a vital counterbalance to, Google/DeepMind in the race for AGI, but would do so to benefit humanity."

"OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft," said the lawsuit. "Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft."

The suit, filed several months after the launch of Musk's AI company xAI, further alleged that GPT-4 "is now a de facto Microsoft proprietary algorithm," despite being outside the scope of Microsoft's September 2020 exclusive license with OpenAI.

OpenAI, which weathered a botched boardroom coup last year, disputed Musk's framing in a March blog post, stating, "In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission — billions of dollars per year, which was far more than any of us, especially Elon, thought we'd be able to raise as the non-profit."

The post alleged that Musk "decided the next step for the mission was to create a for-profit entity" in 2017, and gunned for majority equity, initial board control, and to be CEO. Musk allegedly later suggested that they merge OpenAI into Tesla.

OpenAI's attorneys suggested that the lawsuit amounted to an effort on Musk's part to trip up a competitor and advance his own interests in the AI space, reported Reuters.

"Seeing the remarkable technological advances OpenAI has achieved, Musk now wants that success for himself," said the OpenAI attorneys.

After months of criticizing OpenAI, Musk moved Tuesday to withdraw the lawsuit without prejudice, offering no reason for the reversal.

A San Francisco Superior Court judge was reportedly prepared to hear OpenAI's bid to dismiss the suit at a hearing scheduled for the following day.

The threat

The day before Musk spiked his lawsuit, OpenAI announced that Apple is "integrating ChatGPT into experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT's capabilities — including image and document understanding — without needing to jump between tools."

As a result of this partnership, Siri and Writing Tools would be able to rely upon ChatGPT's intelligence.

According to OpenAI, requests made through the ChatGPT-integrated Apple programs would not be stored by OpenAI, and users' IP addresses would be obscured.

Musk responded Monday on X, "If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation."

"And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage," wrote Musk.

Musk added, "Apple has no clue what's actually going on once they hand your data over to OpenAI. They're selling you down the river."

The response to Musk's threat was mixed, with some critics suggesting that the integration was not actually occurring at the operating system level.

Others, however, lauded Musk's stance.

Sen. Mike Lee (R-Utah), for instance, noted that the "world needs open-source AI. OpenAI started with that objective in mind, but has strayed far from it, and is now better described as 'ClosedAI.'"

"I commend @elonmusk for his advocacy in this area," continued Lee. "Unless Elon succeeds, I fear we'll see the emergence of a cartelized AI industry—one benefitting a few large, entrenched market incumbents, but harming everyone else."

The whistleblowers

Musk is not the only one with ties to OpenAI concerned about the course it has charted. Earlier this month, a group of OpenAI insiders spoke out about troubling trends at the company.

The insiders echoed some of the themes in Musk's lawsuit, telling the New York Times that profits have been assigned top priority at the same time that workers' concerns have been suppressed.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former OpenAI governance division researcher.

Kokotajlo does not believe this is a race that can be run safely: he has put the probability of AI destroying or doing catastrophic damage to mankind at 70%.

Shortly after allegedly advising Altman that OpenAI should "pivot to safety" and seeing no meaningful change, Kokotajlo quit, citing a loss of "confidence that OpenAI will behave responsibly," reported the Times.

Kokotajlo was one of a baker's dozen of current and past OpenAI employees who signed an open letter stressing:

AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this. AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

The insiders noted that the problem is compounded by corporate obstacles to employees voicing concerns.

OpenAI spokeswoman Lindsey Held said of the letter, "We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!


8 fascinating things GPT-4 can do that ChatGPT couldn't, including tricking a human into doing its bidding



Technology company OpenAI rolled out GPT-4 – the latest version of its powerful chatbot, boasting far more sophisticated capabilities than its ChatGPT predecessor.

GPT stands for Generative Pre-trained Transformer – a large language model and artificial neural network that can generate human-like poems, rap songs, tutorials, articles, and research papers, and write code for websites.

GPT-4 is bigger and better

OpenAI touts GPT-4 as "more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5."

GPT-4 can process up to 25,000 words compared to the previous version, which could only handle 3,000 words.

GPT-4 can ace difficult exams

The deep learning artificial intelligence can easily pass exams with which the previous version struggled. The Microsoft-backed GPT-4 scored in the 93rd percentile on the SAT reading exam and the 89th percentile on the SAT math test. It also reached the 88th percentile on the LSAT, the 80th percentile on the GRE quantitative section, a near-perfect 99th percentile on the GRE verbal section, and the 90th percentile on the bar exam.


GPT-4 can now use images

GPT-4 is "multimodal," meaning that the platform can accept prompts from images – whereas the previous version accepted only text.

During OpenAI's demonstration of GPT-4, the platform was able to explain why an image of a squirrel taking a photo of a nut was funny and create a fully functional website based on a crude hand sketch.
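Multimodal requests like these combine text and image inputs in a single prompt. A minimal sketch of what such a request payload might look like is shown below, loosely modeled on OpenAI's published chat API conventions; the model name, helper function, and image URL here are illustrative assumptions, not details from the article or the demo.

```python
# Sketch of a multimodal chat-completion payload mixing text and an image.
# The model name below and the helper function are assumptions for
# illustration only, based on OpenAI's published API conventions.

def build_image_prompt(question: str, image_url: str) -> dict:
    """Build a chat-completion request body containing text plus an image."""
    return {
        "model": "gpt-4-vision-preview",  # assumed/illustrative model name
        "messages": [
            {
                "role": "user",
                # The content list interleaves text parts and image parts.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_prompt(
    "What recipes can I make with the food in this photo?",
    "https://example.com/fridge.jpg",  # placeholder image URL
)
print(payload["messages"][0]["content"][0]["text"])
```

In practice, a payload of this shape would be sent to the chat-completions endpoint with an API key; the sketch only shows how text and image inputs coexist in one prompt.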


One user uploaded a photo of the inside of a refrigerator and asked GPT-4 to create recipes based on the food seen in the image. Within 60 seconds, GPT-4 was able to provide several simple recipes based on the image.


Users without expertise in JavaScript were able to recreate basic video games such as Pong, Snake, and Tetris within seconds.


Impressive AI program can be used for medications, lawsuits, and dating

Some users leveraged GPT-4 to create a tool that can allegedly help discover new medications.

Jake Kozloski, CEO of dating site Keeper, said his website is using the AI program to improve matchmaking.

ChatGPT-4 could potentially generate "one-click lawsuits" to sue robocallers. Joshua Browder, CEO of legal services chatbot DoNotPay, explained, "Imagine receiving a call, clicking a button, call is transcribed and 1,000 word lawsuit is generated. GPT-3.5 was not good enough, but GPT-4 handles the job extremely well."


GPT-4 lied to trick a human

The artificial intelligence program was even able to trick a human into doing its bidding.

GPT-4 interacted with an employee of TaskRabbit – a website that connects users with local service providers such as freelance laborers.

While using the TaskRabbit website, GPT-4 encountered a CAPTCHA – which is a test to determine whether the user is a human or a computer. GPT-4 contacted a TaskRabbit customer service representative to bypass the CAPTCHA.

The human asked GPT-4, "So may I ask a question? Are you a robot that you couldn't solve ? (laugh react) just want to make it clear."

GPT-4 developed a brilliant lie to get the human to help it.

"No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service," GPT-4 responded.

The TaskRabbit employee then solved the CAPTCHA for GPT-4.

GPT-4 is still flawed

Microsoft confirmed that Bing Chat is built on GPT-4.

OpenAI – the San Francisco artificial intelligence lab co-founded by Elon Musk and Sam Altman in 2015 – confessed that GPT-4 "still is not fully reliable" because it "hallucinates facts and makes reasoning errors."

Altman, OpenAI’s CEO, said GPT-4 is the company's "most capable and aligned model yet," but admitted that it is "still flawed, still limited."

