
Elon Musk gives ultimatum to OpenAI's new partner after withdrawing lawsuit



South African billionaire Elon Musk has withdrawn his lawsuit against the artificial intelligence organization OpenAI, the company that produced the powerful multimodal large language model GPT-4 last year. He has not, however, given up his crusade, threatening to ban devices belonging to OpenAI's new partner at his companies on account of alleged security threats.

The lawsuit

In February, Musk sued OpenAI and cofounders Sam Altman and Greg Brockman for breach of contract, breach of fiduciary duty, and unfair business practices.

Musk's complaint centered on the suggestion that OpenAI, which he cofounded, set its founding agreement "aflame."

According to the lawsuit, the agreement was that OpenAI "would be a non-profit developing [artificial general intelligence] for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons."

Furthermore, the company would "compete with, and serve as a vital counterbalance to, Google/DeepMind in the race for AGI, but would do so to benefit humanity."

"OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft," said the lawsuit. "Under its new Board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft."

The suit, filed several months after the launch of Musk's AI company xAI, further alleged that GPT-4 "is now a de facto Microsoft proprietary algorithm," despite being outside the scope of Microsoft's September 2020 exclusive license with OpenAI.

OpenAI, which underwent a botched coup last year, disputed Musk's framing in a March blog post, stating, "In early 2017, we came to the realization that building AGI will require vast quantities of compute. We began calculating how much compute an AGI might plausibly require. We all understood we were going to need a lot more capital to succeed at our mission — billions of dollars per year, which was far more than any of us, especially Elon, thought we'd be able to raise as the non-profit."

The post alleged that Musk "decided the next step for the mission was to create a for-profit entity" in 2017, and gunned for majority equity, initial board control, and to be CEO. Musk allegedly later suggested that they merge OpenAI into Tesla.

OpenAI's attorneys suggested that the lawsuit amounted to an effort on Musk's part to trip up a competitor and advance his own interests in the AI space, reported Reuters.

"Seeing the remarkable technological advances OpenAI has achieved, Musk now wants that success for himself," said the OpenAI attorneys.

After months of criticizing OpenAI, Musk moved Tuesday to withdraw the lawsuit without prejudice, offering no explanation.

A San Francisco Superior Court judge was reportedly prepared to hear OpenAI's bid to dismiss the suit at a hearing scheduled for the following day.

The threat

The day before Musk spiked his lawsuit, OpenAI announced that Apple is "integrating ChatGPT into experiences within iOS, iPadOS, and macOS, allowing users to access ChatGPT's capabilities — including image and document understanding — without needing to jump between tools."

As a result of this partnership, Siri and Writing Tools would be able to rely upon ChatGPT's intelligence.

According to OpenAI, requests made through the Apple programs that interface with ChatGPT would not be stored by OpenAI, and users' IP addresses would be obscured.

Musk responded Monday on X, "If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation."

"And visitors will have to check their Apple devices at the door, where they will be stored in a Faraday cage," wrote Musk.

Musk added, "Apple has no clue what's actually going on once they hand your data over to OpenAI. They're selling you down the river."

The response to Musk's threat was mixed, with some critics suggesting that the integration was not actually occurring at the operating system level.

Others, however, lauded Musk's stance.

Sen. Mike Lee (R-Utah), for instance, noted that the "world needs open-source AI. OpenAI started with that objective in mind, but has strayed far from it, and is now better described as 'ClosedAI.'"

"I commend @elonmusk for his advocacy in this area," continued Lee. "Unless Elon succeeds, I fear we'll see the emergence of a cartelized AI industry—one benefitting a few large, entrenched market incumbents, but harming everyone else."

The whistleblowers

Musk is not the only one with ties to OpenAI concerned about the course it has charted. Earlier this month, a group of OpenAI insiders spoke out about troubling trends at the company.

The insiders echoed some of the themes in Musk's lawsuit, telling the New York Times that profits have been assigned top priority at the same time that workers' concerns have been suppressed.

"OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there," said Daniel Kokotajlo, a former OpenAI governance division researcher.

Kokotajlo, who has put the probability of AI destroying or catastrophically damaging mankind at 70%, reckons this is not a race that should be run.

Shortly after allegedly advising Altman that OpenAI should "pivot to safety," Kokotajlo, having seen no meaningful change, quit, citing a loss of "confidence that OpenAI will behave responsibly," reported the Times.

Kokotajlo was one of 13 current and former OpenAI employees who signed an open letter stressing:

AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this. AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.

The insiders noted that the problem is compounded by corporate obstacles to employees voicing concerns.

OpenAI spokeswoman Lindsey Held said of the letter, "We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

Media mogul Tyler Perry says studio expansion 'indefinitely on hold' due to AI



Tyler Perry said that a roughly $800 million expansion of his studio is "indefinitely on hold" due to the possibilities offered by artificial intelligence, according to the Hollywood Reporter.

OpenAI's Sora model generates videos from text prompts. While that capability will likely revolutionize the content creation industry, opening a world of possibilities and cost savings, it will also likely cost some people their jobs.

"I have been watching AI very closely and watching the advancements very closely. I was in the middle of, and have been planning for the last four years, about an $800 million expansion at the studio, which would've increased the backlot a tremendous size, we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I'm seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it's able to do. It's shocking to me," Perry said, according to the Hollywood Reporter.

"I think of all of the construction workers and contractors who are not going to be employed because I'm not doing this next phase of the studio because there is no need to do it," he said.

Perry noted that AI could enable him to avoid traveling to film at locations because a text prompt can be used to generate a desired scene.

"I no longer would have to travel to locations. If I wanted to be in the snow in Colorado, it's text. If I wanted to write a scene on the moon, it's text, and this AI can generate it like nothing. If I wanted to have two people in the living room in the mountains, I don't have to build a set in the mountains, I don't have to put a set on my lot. I can sit in an office and do this with a computer, which is shocking to me," he noted.

"It makes me worry so much about all of the people in the business. Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I'm thinking this will touch every corner of our industry," Perry said, according to the outlet.

Sora has not yet been rolled out to the general public. "Sora is becoming available to red teamers to assess critical areas for harms or risks. We are also granting access to a number of visual artists, designers, and filmmakers to gain feedback on how to advance the model to be most helpful for creative professionals," OpenAI notes on its website.


OpenAI's Sam Altman says he 'was totally wrong' about the extent of anti-Semitism on the left in the US



OpenAI CEO Sam Altman admitted in a post on X that he had been wrong in the past to think that anti-Semitism, especially from leftists in the U.S., was not at the level that people alleged.

Altman wrote, "For a long time i said that antisemitism, particularly on the american left, was not as bad as people claimed. i'd like to just state that i was totally wrong. i still don't understand it, really. or know what to do about it. but it is so f*****."

Elon Musk chimed in to agree, simply replying, "Yes." The wealthy business magnate has previously described himself as "philosemitic."

"Exactly how I felt before and I found the past month so disorienting but once you see it you can't unsee it. And it is bringing profound unity to the Jewish people," someone else tweeted in response to Altman's post.


"When you speak about it and call it out, being a major leader in tech, it helps those who don’t believe it take pause and listen. The tools you are building are more important than anything, making sure AI GPT responses give factual and clear information to those who are seeking information and answers," someone else posted.

"Start with DEI: any 'group' that is deemed 'privileged' is labeled the oppressor. Jewish success in America, the West and Israel means, according to the tenets of DEI, their success is stolen and must be taken 'back' from them. DEI is bigoted, racist poison," Stephen Miller wrote.

Altman was ousted from OpenAI briefly last month but was able to return to the CEO role not long thereafter.
