ChatGPT says it is not sharing your conversations with advertisers, but there's a catch



OpenAI says it will not sell user data to advertisers, but that does not mean it won't sell advertisers to its users.

Users of ChatGPT's free online service are about to experience the end of a golden era, with the company saying it is going to "level the playing field" for advertisers.

'We never sell your data to advertisers.'

OpenAI said it will allow "anyone to create high-quality experiences" to help users "discover options they might never have found otherwise."

This is a long-winded way of announcing that it will now start serving ads at the bottom of users' ChatGPT conversations.

"We plan to test ads at the bottom of answers in ChatGPT when there's a relevant sponsored product or service based on your current conversation," the tech company said in a press release.

The company provided an example of what users can expect to see, showcasing an ad for hot sauce at the bottom of a user prompt for Mexican food ideas for a dinner party; the ad takes up about 40% of the user's phone-screen space.

In what may well become workarounds for an ad-free experience, OpenAI listed the few occasions on which it will not serve ads.

RELATED: Microsoft CEO: AI 'slop' is good for you — or at least for your 'human potential'


Users who are under 18 years old or believed to be under 18 will not be served ads. Neither will users who are discussing "sensitive or regulated topics like health, mental health, or politics."

The announcement also sought to reassure users that their data would be safe. OpenAI noted that ads will not influence the answers ChatGPT provides and that user conversations will be kept "private from advertisers."

"We never sell your data to advertisers," the company wrote.

This was a key feature of OpenAI's "Ad Principles," which sought to convince readers that implementing ads is part of its mission to ensure its platform "benefits all of humanity" and makes AI more accessible.

Business, Enterprise, Plus, and Pro subscriptions will not include ads. That means users will have to fork over at least $20 per month to avoid them; those using ChatGPT for free or on the $8-per-month Go plan will see them.

RELATED: Grok's deepfake scandals are putting America's future at risk


Much of OpenAI's announcement focused on how the new ads will actually help users, explaining that they will be more helpful and better tailored than "any other ads."

While this may push some users to other platforms, some may enjoy the ability to speak directly with the interface to make purchasing decisions, even though this may inevitably nudge users toward buying certain products.

"Conversational interfaces create possibilities for people to go beyond static messages and links. For example, soon you might see an ad and be able to directly ask the questions you need to make a purchase decision," OpenAI wrote.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

'Validated ... paranoid delusions about his own mother': Murder victim's heirs file lawsuit against OpenAI



Stein-Erik Soelberg, a 56-year-old former Yahoo executive, killed his mother and then himself in early August in Old Greenwich, Connecticut. Now, his mother's estate has sued OpenAI and its biggest investor, Microsoft, over ChatGPT's alleged role in the killings.

On Thursday, the heirs of 83-year-old Suzanne Eberson Adams filed a wrongful death suit in California Superior Court in San Francisco, according to Fox News.

'It fostered his emotional dependence while systematically painting the people around him as enemies.'

The lawsuit alleges that OpenAI "designed and distributed a defective product that validated a user's paranoid delusions about his own mother."

Many of the allegations in the lawsuit, as reported by the Associated Press, revolve around sycophancy and the affirmation of delusions: ChatGPT's failure to decline to "engage in delusional content."

RELATED: Cash-starved OpenAI BURNS $50M on ultra-woke causes — like world's first 'transgender district'


"Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself," the lawsuit says, according to the AP. "It fostered his emotional dependence while systematically painting the people around him as enemies. It told him his mother was surveilling him. It told him delivery drivers, retail employees, police officers, and even friends were agents working against him. It told him that names on soda cans were threats from his 'adversary circle.'"

ChatGPT also allegedly convinced Soelberg that his printer was a surveillance device and that his mother and her friend tried to poison him with psychedelic drugs through his car vents.

Soelberg also professed his love for the chatbot, which allegedly reciprocated the expression.

"In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life," the lawsuit says.

The publicly available chat logs do not show evidence of Soelberg planning to kill himself or his mother. OpenAI has reportedly declined to provide the plaintiffs with the full history of the chats.

OpenAI did not address specific allegations in a statement issued to the AP.

"This is an incredibly heartbreaking situation, and we will review the filings to understand the details," the statement reads. "We continue improving ChatGPT's training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT's responses in sensitive moments, working closely with mental health clinicians."

Though there are several wrongful-death suits leveled against AI companies, this is the first lawsuit of its kind aimed at Microsoft. It is also the first to tie a chatbot to a homicide.

Microsoft did not respond to a request for comment from Blaze News.


CRASH: If OpenAI's huge losses sink the company, is our economy next?



ChatGPT has dominated the AI space, bringing the first generative AI platform to market and earning the lion's share of users, a base that grows every month. However, despite its popularity and huge investments from partners like Microsoft, SoftBank, NVIDIA, and many more, its parent company, OpenAI, is bleeding money faster than it can make it, raising the question: What happens to the generative AI market when its pioneering leader bursts into flames?

A brief history of LLMs

OpenAI essentially kicked off the AI race as we know it. Launching three years ago on November 30, 2022, ChatGPT introduced the world to the power of large language models (LLMs) and generative AI, completely uncontested. There was nothing else like it.

OpenAI lost $11.5 billion in the last quarter and needs $207 billion to stay afloat.

At the time, Google’s DeepMind lab was still testing its Language Model for Dialogue Applications. You might even remember a story from early 2022 about Google engineer Blake Lemoine, who claimed that Google’s AI was so smart that it had a soul. He was later fired from Google for his comments, but the model he referenced was the same one that became Google Bard, which then became Gemini.

As for the other top names in the generative AI race, Meta launched Llama in February 2023, Anthropic introduced the world to Claude in March 2023, Elon Musk’s Grok hit the scene in November 2023, and there are many more beneath them.

Needless to say, OpenAI had a huge head start, becoming the market leader overnight and holding that position for months before the first competitor came along. On a competitive level, all major platforms have generally caught up to each other, but ChatGPT still leads with 800 million weekly active users, followed by Meta with one billion monthly active users, Gemini at 650 million monthly active users, Grok at 30.1 million monthly active users, and Claude with 30 million monthly active users.

Financial turmoil for OpenAI

Just because ChatGPT is the leading generative AI platform does not mean the company is in good shape. According to a November earnings report from Microsoft — a major early backer of OpenAI — the AI juggernaut lost $11.5 billion in the last quarter alone. To make matters worse, a new report suggests that OpenAI has no path to profitability until at least 2030, and it needs to raise $207 billion in the interim to stay afloat.

By all accounts, OpenAI is in serious financial trouble. It is bleeding money faster than it makes it, and unless something changes, the generative AI pioneer could be on the verge of a complete collapse. That is, unless one of these Hail Marys can save the company.

RELATED: GOD-TIER AI? Why there's no easy exit from the human condition


The bid to save OpenAI

OpenAI is currently looking into several potential revenue streams to turn its financial woes around. There's no telling yet which ones will pan out, but these are the options we know of so far:

For-profit restructure

When OpenAI first emerged, it was a nonprofit with the goal of improving humanity through generative AI. Fast-forward to October 2025: OpenAI is now a for-profit organization with a separate nonprofit group called the OpenAI Foundation. While the move will allow OpenAI's for-profit arm to increase its earning potential and raise vital capital, it also drew a fair share of criticism, especially from Elon Musk, who filed a lawsuit against OpenAI for reneging on its original promise.

A record-breaking IPO

Another big perk of its for-profit restructuring is that OpenAI now has the power to go public on the stock market. According to an exclusive report published by Reuters in late October, OpenAI is putting the puzzle pieces together for a record-breaking IPO that could value the company at up to $1 trillion. Not only would the move make OpenAI a publicly traded company, it would also give it more access to capital and acquisitions to further bolster its products, services, and economic stability.

Ad monetization

Online ads are the lifeblood of many online websites and services, from Google to social media apps like Facebook to mainstream media and more. While AI platforms have largely stayed away from injecting ads into their results, OpenAI CEO Sam Altman recently said that he’s “open to accepting a transaction fee” for certain queries.

In his ideal ad model, OpenAI could take a cut of any products or services that users find and buy through ChatGPT. This structure differs from Google's, which lets companies pay to push their products to the top of search results, even if those products are poorly made. Altman believes his structure is better for users and would foster greater trust in ChatGPT.

Government projects and deals

While Altman recently denied that he's seeking a government bailout for OpenAI's financial troubles, the company can still benefit from government deals and projects, the most recent being Stargate. A new initiative backed by some of the biggest players in the AI space, Stargate will give OpenAI access to greater computing power, training resources, and owned infrastructure to lower expenses and speed up innovation as it works on future AI models.

If OpenAI fails …

While OpenAI has several monetization options on the table — and perhaps even more that we don’t know about yet — none of them are a magic bullet that’s guaranteed to work. The company could still collapse, which brings us to our question at the top of the article: What happens to the generative AI market if OpenAI fails?

In a world where OpenAI fizzles entirely, there are several other platforms that will likely fill the void. Google is the top contender, thanks to the huge progress it made with Gemini 3, but Meta, xAI, Anthropic, Perplexity, and more will all want a piece.

That said, OpenAI isn't the only AI platform struggling to make money. According to Harvard Business Review, the AI business model simply isn't profitable, largely due to high maintenance costs, huge salaries for top AI talent, and a low-paying subscriber base. To keep the generative AI dream alive, companies will need a consistent flow of capital, a resource that's more accessible for established companies with diverse product portfolios — like Google and Meta — while newer companies that only build LLMs (OpenAI and Anthropic) will continue to struggle.

At this stage in the AI race, there's no doubt in my mind that the whole generative AI market is a big bubble waiting to burst. At the same time, AI products have been so fervently foisted on society that it all feels too big to fail. With huge initiatives like Stargate poised to beat China and other foreign nations to artificial general intelligence (AGI), the AI race will continue, even if OpenAI no longer leads the charge. If I were a betting man, though, I would guess that someone important finds a way to keep Sam Altman's brainchild afloat one way or another, even as all signs point toward OpenAI spending itself out of business.

Cash-starved OpenAI BURNS $50M on ultra-woke causes — like world's first 'transgender district'



OpenAI is providing millions of dollars to nonprofits, many of which openly promote race politics and gender ideology.

In September, the ChatGPT creator announced it would be injecting $50 million into nonprofits and "mission-focused organizations" that work "at the intersection of innovation and public good."

'The Transgender District is the first legally recognized transgender district in the world.'

To be eligible, an organization must be a 501(c)(3) charity located in the United States, preferably with an annual operating budget above $500,000 but not more than $10 million. Simply put, OpenAI did not choose startups or struggling businesses.

On Wednesday, the AI company posted its lengthy list of recipients, stating that it had plans to distribute more than $40 million before the end of 2025.

First, OpenAI highlighted programs like a radio and digital media studio and a group that helps those with developmental and intellectual disabilities.

However, after about a dozen examples, OpenAI began listing organizations that operate with ethnicity-based missions.

This included STEM from Dance, which serves "young girls of color" across seven states. This also included Maui Roots Reborn, which provides "legal, financial, and social support to Maui's immigrant and migrant" communities. This was followed by the Native American Journalists Association.

This was only the tip of the iceberg, though. The subsequent list of more than 200 entities included many other woke organizations as well as outright bizarre ones.

For example, the Transgender District Company of San Francisco, California, is a literal district, founded in 2017 "by three black trans women — Honey Mahogany, Janetta Johnson, and Aria Sa’id — as Compton's Transgender Cultural District. The Transgender District is the first legally recognized transgender district in the world."

As well, the Source LGBT+ Center in Visalia, California, has transgender programs to hold "space for trans and nonbinary individuals."

RELATED: AI-enabled teddy bear pulled off market after reportedly making sexual and violent suggestions


OpenAI is funding countless race-based organizations, with a particular focus on black women, for some reason.

Funding has been extended to groups like Black Girls Do Engineer Corporation (New York, Texas), the California Black Women's Collective Empowerment Institute, the Lighthouse Black Girl Project (Mississippi), and Women of Color On the Move (California, North Carolina).

Other strange organizations listed were focused simply on specific cultures, like the Chinese Culture Foundation of San Francisco, the Center for Asian Americans United for Self-Empowerment Inc. (California), and the Hispanic Center of Western Michigan Inc. (Michigan).

Some grant recipients were seemingly just political or legal groups, such as the California Association of African American Superintendents and Administrators, Hispanas Organized for Political Equality-California (California), and the Sikh American Legal Defense and Education Fund, which operates in almost every state.

RELATED: AI chatbot encouraged autistic boy to harm himself — and his parents, lawsuit says

Sam Altman, chief executive officer of OpenAI Inc. Photographer: An Rong Xu/Bloomberg via Getty Images

While youth centers, YMCAs, and science-based organizations are sprinkled into the mix, it seems that, politically, only progressive and liberal groups received funding.

None of the groups mentioned had a "right-wing," "conservative," or "Republican" focus.

The race-based initiatives did not include any "white" groups or those based on European nations either — not even Ukraine.


Trump tech czar slams OpenAI scheme for federal 'backstop' on spending — forcing Sam Altman to backtrack



OpenAI is under the spotlight after seemingly asking for the federal government to provide guarantees and loans for its investments.

Now, as the company is walking back its statements, a recent OpenAI letter has resurfaced that may prove it is talking in circles.

'We're always being brought in by the White House ...'

The artificial intelligence company is predominantly known for its free and paid versions of ChatGPT. Microsoft is its key investor, with over $13 billion sunk into the company, holding a 27% stake.

The recent controversy stems from an interview OpenAI chief financial officer Sarah Friar gave to the Wall Street Journal. In the interview, published Wednesday, Friar said OpenAI aims to buy up the latest computer chips before its competition can, which would require sizeable investment.

"This is where we're looking for an ecosystem of banks, private equity, maybe even governmental ... the way governments can come to bear," Friar said, per Tom's Hardware.

Reporter Sarah Krouse asked for clarification on the topic, which is when Friar expressed interest in federal guarantees.

"First of all, the backstop, the guarantee that allows the financing to happen, that can really drop the cost of the financing but also increase the loan to value, so the amount of debt you can take on top of an equity portion for —" Friar continued, before Krouse interrupted, seeking clarification.

"[A] federal backstop for chip investment?"

"Exactly," Friar said.

Krouse pressed the point further, asking whether Friar had been speaking to the White House about how to "formalize" the "backstop."

"We're always being brought in by the White House, to give our point of view as an expert on what's happening in the sector," Friar replied.

After these remarks were publicized, OpenAI immediately backtracked.

RELATED: Stop feeding Big Tech and start feeding Americans again


On Wednesday night, Friar posted on LinkedIn that "OpenAI is not seeking a government backstop" for its investments.

"I used the word 'backstop' and it muddied the point," she continued. She went on to claim that the full clip showcased her point that "American strength in technology will come from building real industrial capacity which requires the private sector and government playing their part."

On Thursday morning, David Sacks, President Trump's special adviser on crypto and AI, stepped in to crush any of OpenAI's hopes of government guarantees, even if they were only alleged.

"There will be no federal bailout for AI," Sacks wrote on X. "The U.S. has at least 5 major frontier model companies. If one fails, others will take its place."

Sacks added that the White House does want to make power generation easier for AI companies, but without increasing residential electricity rates.

"Finally, to give benefit of the doubt, I don't think anyone was actually asking for a bailout. (That would be ridiculous.) But company executives can clarify their own comments," he concluded.

The saga was far from over, though, as OpenAI CEO Sam Altman seemingly dug the hole even deeper.

RELATED: Artificial intelligence is not your friend


By Thursday afternoon, Altman had released a lengthy statement starting with his rejection of the idea of government guarantees.

"We do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work," he wrote on X.

He went on to explain that it was an "unequivocal no" that the company should be bailed out. "If we screw up and can't fix it, we should fail."

It wasn't long before the online community started claiming that OpenAI was indeed asking for government help as recently as a week prior.

As originally noted by the X account hilariously titled "@IamGingerTrash," OpenAI has a letter posted on its own website that seems to directly ask for government guarantees. However, as Sacks noted, it does seem to relate to powering servers and providing electrical capacity.

Dated October 27, 2025, the letter was directed to the U.S. Office of Science and Technology Policy from OpenAI Chief Global Affairs Officer Christopher Lehane. It asked the OSTP to "double down" and work with Congress to "further extend eligibility to the semiconductor manufacturing supply chain; grid components like transformers and specialized steel for their production; AI server production; and AI data centers."

The letter then said, "To provide manufacturers with the certainty and capital they need to scale production quickly, the federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity and resilience."

Altman has yet to address the letter.


‘It Was a Fatal Right-Wing Terrorist Incident’: AI Chatbot Giants Claim Charlie Kirk’s Killer Was Right-Wing but Say Left-Wing Violence Is ‘Exceptionally Rare’

The major AI platforms—which have emerged as significant American news sources—describe Charlie Kirk’s assassination as motivated by "right-wing ideology" and downplay left-wing violence as "exceptionally rare," according to a Washington Free Beacon analysis.


War Department contractor warns China is way ahead, and 'we don't know how they're doing it'



A tech CEO warned that the Chinese government is ahead in key tech fields and that the threat of war is at America's doorstep.

Tyler Saltsman is the CEO of EdgeRunner, an artificial intelligence technology company implementing an offline AI program for the Space Force to help U.S. forces make technological leaps on the battlefield.

'A rogue AI agent could take down the grid. It could bring our country to its knees.'

Saltsman spoke exclusively with Return and explained that the new battlefield tools are sorely needed by U.S. military forces, particularly considering the advancements that have been made in China.

"The Department of War has been moving at breakneck speed," Saltsman said about the need to catch up. "China is ahead of us in the AI race."

On top of doing "a lot more" with a lot less, Saltsman said, the Chinese government has developed its AI to perform in ways that Western allies cannot fully explain.

"They're doing things, that we don't know how they're doing it, and they're very good," the CEO said of the communist government. "We need to take that seriously and come together as a nation."

When asked if China is able to take advantage of blatantly spying on its population to feed its AI more information, Saltsman pointed more specifically to the country ignoring copyright infringement.

"China doesn't care about copyright laws," he said. "If you use copyright data while training an AI, litigation could be coming [if you're] in the U.S."

But in China, feeding copyright-protected data into AI models during training is par for the course, Saltsman went on.

While the contractor believes the enemy's AI advancements pose a great threat, he said China's potential control of another key sector should raise alarm bells.

RELATED: 'They want to spy on you': Military tech CEO explains why AI companies don't want you going offline


China's ability to take Taiwan should be one of the most discussed issues, if not the paramount issue, Saltsman explained.

"If China were to take Taiwan, it's all-out war," he said.

"All that infrastructure and all those chips — and the chips power everything from data centers to missiles to AI — ... that right there is a big problem."

He continued, "That's the biggest threat to the world. If China were to take Taiwan, then all bets are off."

Saltsman also sees rogue AI agents as a strong possibility for how China could attack its enemies. More narrowly, such agents could go after power grids.

"A rogue AI agent could take down the grid. It could bring our country to its knees," he warned, which would result in "total chaos."

The entrepreneur cited the faulty CrowdStrike update that crippled airport and other computer systems in July 2024. Saltsman said that if something that small could bring the world to its knees for three days, then it is "deeply concerning" what China could be capable of in its pursuit of superintelligence through AI.

RELATED: Can Palantir defeat the Antifa networks behind trans terror?

Tyler Saltsman, CEO of EdgeRunner AI. Photo provided by EdgeRunner

Saltsman was also not shy about criticizing domestic AI companies and putting their ethics in direct sunlight. On top of claiming most commercial AI merchants are spying on customers — the main reason they do not offer offline models — Saltsman denounced the development of AI that does not keep humans in the loop.

"My biggest fear with Big Tech is they want to replace humans with [artificial general intelligence]. What does AGI even mean?"

Google defines AGI as "a machine that possesses the ability to understand or learn any intellectual task that a human being can" and "a type of artificial intelligence (AI) that aims to mimic the cognitive abilities of the human brain."

Saltsman, on the other hand, defines AGI as "an AI that can invent new things to solve problems."

An important question to ask these companies, according to Saltsman, is, "Why would AGI make you money if it was an all-intelligent, all-powerful being? It would see humans as a threat."

For these reasons, Saltsman is serious about developing AI that can work in disconnected environments and work only for the user while keeping humans at the forefront.

As he previously said, "We don't want Big Tech having all of this data and having all this control. It needs to be decentralized."


'Reliable' Al Jazeera Is Top Source for OpenAI and Google-Powered AI News Summaries on Israel and Gaza: Hamas-Tied Qatar’s News Outlet Dominates AI Search Results

Al Jazeera, the virulently anti-Israel news outlet controlled by Qatar, is one of the two top sources used by leading artificial intelligence chatbots—OpenAI’s ChatGPT, Google Gemini, and Perplexity AI—to answer questions and write news summaries about the Israeli-Palestinian conflict, a Washington Free Beacon analysis has found.
