GOD-TIER AI? Why there's no easy exit from the human condition



Many working in technology are entranced by a story of a god-tier shift that is soon to come. The story is the “fast takeoff” for AI, often involving an “intelligence explosion.” There will be a singular moment, a cliff-edge, when a machine mind, having achieved critical capacities for technical design, begins to implement an improved version of itself. In a short time, perhaps mere hours, it will soar past human control, becoming a nearly omnipotent force, a deus ex machina for which we are, at best, irrelevant scenery.

This is a clean narrative. It is dramatic. It has the terrifying, satisfying shape of an apocalypse.

It is also a pseudo-messianic myth resting on a mistaken understanding of what intelligence is, what technology is, and what the world is.


The fantasy of a runaway supermind achieving escape velocity collides with the stubborn, physical, and institutional realities of our lives. This narrative mistakes a scalar for a capacity, ignoring the fact that intelligence is not a context-free number but a situated process, deeply entangled with physical constraints.

The fixation on an instantaneous leap reveals a particular historical amnesia. We are told this new tool will be a singular event. The historical record suggests otherwise.

Major innovations, the ones that truly resculpted civilization, were never events. They were slow, messy, multi-decade diffusions. The printing press did not achieve the propagation of knowledge overnight; its revolutionary power lay in gradually enabling the reliable reproduction and wide dissemination of information, which in turn allowed knowledge to compound. The steam engine unfolded over generations, its deepest impact trailing its invention by decades.

With each novel technology, we have seen a similar cycle of panic: a flare of moral alarm, a set of dire predictions, and then, inevitably, the slow, grinding work of normalization. The world adapts. The apocalypse is deferred. The technology is integrated. There is little reason to believe this time is different, however much the myth insists upon it.

The fantasy of a fast takeoff is conspicuously neat. It is a narrative free of friction, of thermodynamics, of the intractable mess of material existence. Reality, in contrast, has all of these things. A disembodied mind cannot simply will its own improved implementation into being.



Any improvement, recursive or otherwise, encounters physical limits. Computation is bounded by the speed of light. The required energy is already staggering. Improvements will require hardware that depends on factories, rare minerals, and global supply chains. These things cannot be summoned by code alone. Even when an AI can design a better chip, that design will need to be fabricated. The feedback loop between software insight and physical hardware is constrained by the banal, time-consuming realities of engineering, manufacturing, and logistics.

The intellectual constraints are just as rigid. The notion of an “intelligence explosion” assumes that all problems yield to better reasoning. This is an error. Many hard problems are computationally intractable and provably so. They cannot be solved by superior reasoning; they can only be approximated in ways subject to the limits of energy and time.
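To make the intractability point concrete, consider brute-force search over a classic routing problem. The sketch below is purely illustrative (Python, with a generously assumed evaluation rate of one candidate route per nanosecond); no amount of cleverness in the reasoner changes the arithmetic of the search space.

```python
import math

# Count the distinct round trips through n cities and estimate how long
# exhaustive search would take at an assumed one tour per nanosecond.
for n in (10, 20, 30):
    tours = math.factorial(n - 1) // 2
    seconds = tours / 1e9
    years = seconds / 3.15e7
    print(f"{n} cities: {tours:.2e} tours, roughly {years:.2e} years to check them all")
```

At 30 cities the brute-force estimate already runs to around 10^14 years; smarter heuristics help, but they approximate rather than abolish the limit.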

Ironically, we already have a system of recursive self-improvement. It is called civilization, employing the cooperative intelligence of humans. Its gains over the centuries have been steady and strikingly gradual, not explosive. Each new advance requires more, not less, effort. When the “low-hanging fruit” is harvested, diminishing returns set in. There is no evidence that AI, however capable, is exempt from this constraint.

Central to the concept of fast takeoff is the erroneous belief that intelligence is a singular, unified thing. Recent AI progress provides contrary evidence. We have not built a singular intelligence; we have built specific, potent tools. AlphaGo achieved superhuman performance in Go, a spectacular leap within its domain, yet its facility did not generalize to medical research. Large language models display great linguistic ability, but they also “hallucinate,” and pushing from one generation to the next requires not a sudden spark of insight, but an enormous effort of data and training.

The likely future is not a monolithic supermind but a network of specialized AI services for language, vision, physics, and design. AI will remain a set of tools, managed and combined by human operators.

To frame AI development as a potential catastrophe that suddenly arrives swaps a complex, multi-decade social challenge for a simple, cinematic horror story. It allows us to indulge in the fantasy of an impending technological judgment, rather than engage with the difficult path of development. The real work will be gradual, involving the adaptation of institutions, the shifting of economies, and the management of tools. The god-machine is not coming. The world will remain, as ever, a complex, physical, and stubbornly human affair.

Trump and Elon want TRUTH online. AI feeds on bias. So what's the fix?



The Trump administration has unveiled a broad action plan for AI (America’s AI Action Plan). The general vibe is one of treating AI like a business, aiming to sell the AI stack worldwide and generate a lock-in for American technology. “Winning,” in this context, is primarily economic. The plan also includes the sorely needed idea of modernizing the electrical grid, a growing concern due to rising electricity demands from data centers. While any extra business is welcome in a heavily indebted nation, the section on the political objectivity of AI is both too brief and misunderstands the root cause of political bias in AI and its role in the culture war.

The plan uses the term "objective" and implies that a lack of objectivity is entirely the fault of the developer, for example:

Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.

The fear that AIs might tip the scales of the culture war away from traditional values and toward leftism is real. Try asking ChatGPT, Claude, or even DeepSeek about climate change, where COVID came from, or USAID.


This desire for AI objectivity may come from a good place, but it fundamentally misconstrues how AIs are built. AI in general and LLMs in particular are a combination of data and algorithms, which further break down into network architecture and training methods. Network architecture is frequently based on stacking transformer or attention layers, though it can be modified with concepts like “mixture of experts.” Training methods are varied and include pre-training steps such as data cleaning, weight initialization, tokenization, and learning-rate schedules. They also include post-training methods, in which the base model is modified to conform to a metric other than the accuracy of predicting the next token.
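For readers who want that decomposition in concrete terms, here is a minimal sketch, assuming PyTorch, of the pieces named above: a stack of attention layers as the architecture, a next-token objective for pre-training, and a separate post-training objective that optimizes something other than next-token accuracy. All names and sizes are illustrative, and `reward_model` is an assumed stand-in, not any vendor's actual system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, LAYERS = 32_000, 512, 8

class TinyLM(nn.Module):
    """A toy language model: embedding -> stacked transformer layers -> vocabulary head."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=LAYERS)  # the "stacked attention layers"
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):                                  # tokens: (batch, seq)
        seq = tokens.size(1)
        causal = torch.triu(torch.full((seq, seq), float("-inf")), diagonal=1)
        hidden = self.blocks(self.embed(tokens), mask=causal)   # attend only to earlier tokens
        return self.head(hidden)

model = TinyLM()

def pretraining_loss(tokens):
    """Pre-training: reward accurate prediction of the next token."""
    logits = model(tokens[:, :-1])
    return F.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))

def posttraining_loss(tokens, reward_model):
    """Post-training (RLHF-flavored sketch): nudge the model toward an external
    reward signal that is not next-token accuracy; this is where preferences
    about tone and 'safety' enter the weights."""
    log_probs = F.log_softmax(model(tokens[:, :-1]), dim=-1)
    chosen = log_probs.gather(-1, tokens[:, 1:].unsqueeze(-1)).squeeze(-1).sum(dim=-1)
    return -(reward_model(tokens) * chosen).mean()
```

The point of the sketch is the separation of concerns: the architecture and the pre-training objective say nothing about politics; the data and the post-training signal are where a worldview can enter.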

Many have complained that post-training methods like Reinforcement Learning from Human Feedback introduce political bias into models at the cost of accuracy, causing them to avoid controversial topics or spout opinions approved by the companies — opinions usually farther to the left than those of the average user. “Jailbreaking” models to avoid such restrictions was once a common pastime, but it is becoming harder, as corporate safety measures, sometimes as complex as entirely new models, scan both the input to and output from the underlying base model.
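Those safety layers usually sit outside the base model entirely. A minimal sketch of the wrapper pattern described above, with every name hypothetical:

```python
def guarded_generate(prompt, base_model, input_filter, output_filter):
    """Screen the prompt before the base model sees it, and screen the
    completion before the user sees it. Both filters are separate models
    or rule sets chosen by the operator, not by the base model."""
    if input_filter(prompt):
        return "I can't help with that."
    completion = base_model(prompt)
    if output_filter(prompt, completion):
        return "I can't help with that."
    return completion
```

Removing or loosening these wrappers changes what the system will say, but not what the underlying weights absorbed from the training data.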

As a result of this battle between RLHF and jailbreakers, an idea has emerged that these post-training methods and safety features are how liberal bias gets into the models. The belief is that if we simply removed these, the models would display their true objective nature. Unfortunately for both the Trump administration and the future of America, this is only partially correct. Developers can indeed make a model less objective and more biased in a leftward direction under the guise of safety. However, it is very hard to make models that are more objective.

The problem is data

According to the study "Google AI Mode vs. Traditional Search & Other LLMs," the top domains cited by LLMs are: Reddit (40%), YouTube (26%), Wikipedia (23%), Google (23%), Yelp (21%), Facebook (20%), and Amazon (19%).

This seems to imply that a lot of the real-world factual data in AIs comes from Reddit. Spending trillions of dollars to create an “eternal Redditor” isn’t going to cure cancer. At best, it might create a “cure cancer cheerleader” who hypes up every advance and forgets about it two weeks later. One can only do so much in the algorithm layer to counteract the frame of mind of the average Redditor. In this sense, the political slant of LLMs is less due to the biases of developers and corporations (although those exist) and more due to the biases of the training data, which is heavily skewed toward being generated during the "woke tyranny" era of the internet.

In this way, the AI bias problem is not about removing bias to reveal a magic objective base layer. Rather, it is about creating a human-generated and curated set of true facts that can then be used by LLMs. Using legislation to remove the methods by which left-leaning developers push AIs into their political corner is a great idea, but it is far from sufficient. Getting humans to generate truthful data is extremely important.

The pipeline to create truthful data likely needs at least four steps.

1. Raw data generation of detailed tables and statistics (usually done by agencies or large enterprises).

2. Mathematically informed analysis of this data (usually done by scientists).

3. Distillation of scientific studies for educated non-experts (in theory done by journalists, but in practice rarely done at all).

4. Social distribution via either permanent (wiki) or temporary (X) channels.

This problem of truthful data plus commentary for AI training is a government, philanthropic, and business problem.



I can imagine an idealized scenario in which all these problems are solved by harmonious action in all three directions. The government can help the first portion by forcing agencies to be more transparent with their data, putting it into both human-readable and computer-friendly formats. That means more CSVs, plain text, and hyperlinks and fewer citations, PDFs, and fancy graphics with hard-to-find data. FBI crime statistics, immigration statistics, breakdowns of government spending, the outputs of government-conducted research, minute-by-minute election data, and GDP statistics are fundamentally pillars of truth and are almost always politically helpful to the broader right.

In an ideal world, the distillation of raw data into causal models would be done by a team of highly paid scientists via a nonprofit or a government contract. This work is too complex to be left to the crowd, and its benefits are too distributed to be easily captured by the market.

The journalistic portion of combining papers into an elite consensus could be done similarly to today: with high-quality, subscription-based magazines. While such businesses can be profitable, for this content to integrate with AI, the AI companies themselves need to properly license the data and share revenue.

The last step seems to be mostly working today, as it would be done by influencers paid via ad revenue shares or similar engagement-based metrics. Creating permanent, rather than disappearing, data (à la Wikipedia) is a time-intensive and thankless task that will likely need paid editors in the future to keep the quality bar high.

Freedom doesn't always boost truth

However, we do not live in an ideal world. The epistemic landscape has vastly improved since Elon Musk's purchase of Twitter. At the very least, truth-seeking accounts don’t have to deal with as much arbitrary censorship. Even other media outlets have made token statements claiming they will censor less, even as some AI “safety” features are ramped up to levels beyond anything social media censorship ever reached.

The challenge with X and other media is that tech companies generally favor technocratic solutions over direct payment for pro-social content. There seems to be a widespread belief in a marketplace of ideas: the notion that without censorship (or with only one's own preferred censorship in place), truthful ideas will win out over false ones. This likely contains an element of truth, but the peculiarities of each algorithm may favor only certain types of truthful content.

“X is the new media” is a common refrain. Yet both anonymous and public accounts on X are implicitly burdened with tasks as varied and complex as gathering election data, writing long think pieces, and consistently repeating slogans that reinforce a key message. All for a chance at a few Elon bucks. They are doing this while competing with stolen-valor thirst traps from overseas accounts. Obviously, most are not that motivated and stick to pithy, simple content rather than intellectually grounded think pieces. The broader “right” is still needlessly ceding intellectual and data-creation ground to the left, despite occasional victories in defunding anti-civilizational NGOs and taking control of key platforms.

The other issue experienced by data creators across the political spectrum is the reliance on unpaid volunteers. As the economic belt inevitably tightens and productive people have less spare time, the supply of quality free data will worsen. It will also worsen as platforms and users, feeling rightful indignation at their data being “stolen” by AI companies making huge profits, move content into gatekept platforms like Discord. While X is unlikely to go back to the “left,” its quality can certainly fall further.

Even Redditors and Wikipedia contributors provide fairly complex, if generally biased, data that powers the entire AI ecosystem. Also for free. A community of unpaid volunteers working to spread useful information sounds lovely in principle. However, in addition to the decay in quality, these kinds of “business models” are generally very easy to disrupt with minor infusions of outside money, even if that just means paying a full-time person to post. If you are not paying to generate politically powerful content, someone else is always happy to.

The other dream of tech companies is to use AI to “re-create” the entirety of the pipeline. We have heard so much drivel about “solving cancer” and “solving science.” While speeding up human progress by automating simple tasks is certainly going to work and is already working, the dream of full replacement will remain a dream, largely because of “model collapse,” the situation where AIs degrade in quality when they are trained on data generated by themselves. Companies occasionally hype up “no data/zero-knowledge/synthetic data” training, but a big example from 10 years ago, reinforcement learning from self-play without human data, which worked for chess and Go, went nowhere in games as complex as StarCraft.
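The mechanics of model collapse are easy to see in a toy setting. The sketch below, a deliberately oversimplified Gaussian stand-in and not any lab's actual pipeline, fits a "model" to samples drawn from the previous model, generation after generation; because the maximum-likelihood variance estimate shrinks slightly in expectation at every step, the synthetic distribution steadily loses the diversity of the original human data.

```python
import random
import statistics

random.seed(0)

# Stand-in for human-written data: 1,000 points from a unit Gaussian.
human_data = [random.gauss(0.0, 1.0) for _ in range(1_000)]
mu, sigma = statistics.fmean(human_data), statistics.pstdev(human_data)

# Each generation "trains" only on samples produced by the previous generation.
for generation in range(1, 201):
    synthetic = [random.gauss(mu, sigma) for _ in range(100)]
    mu, sigma = statistics.fmean(synthetic), statistics.pstdev(synthetic)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: spread = {sigma:.3f} (started near 1.0)")
```

Real language models are vastly more complicated, but the underlying statistics are why synthetic-data pipelines still need a steady supply of human-generated text.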

So where does truth come from?

This brings us to the recent example of Grokipedia. Perusing it gives one a sense that we have taken a step in the right direction, with an improved ability to summarize key historical events and medical controversies. However, a number of articles are lifted directly from Wikipedia, which risks drawing the wrong lesson. Grokipedia can’t “replace” Wikipedia in the long term because Grok’s own summarization is dependent on it.

Like many of Elon Musk’s ventures, Grokipedia is two steps forward, one step back. The forward steps are a customer-facing encyclopedia that seems to be of higher quality than Wikipedia and a good example of AI-generated long-form content that is not mere slop, achieved by automating the tedious, formulaic steps of summarization. The backward step is a failure to understand what the ecosystem looks like without Wikipedia: because so many of Grokipedia’s articles are lifted directly from it, if Wikipedia disappears, it will be very hard to keep neutral articles properly updated.

Even the current version suffers from a “chicken and egg” source-of-truth problem. If no AI has the real facts about the COVID vaccine and every model categorically rejects data about its safety or lack thereof, then Grokipedia will not be accurate on this topic unless a fairly well-paid editor researches and writes the true story. As mentioned, model collapse is likely to result from feeding too much of Grokipedia back to Grok itself (and other AIs), leading to degradation of quality and truthfulness. Relying on unpaid volunteers to suggest edits also creates a very easy vector for paid NGOs to influence the encyclopedia.

The simple conclusion is that to be good training data for future AIs, the next source of truth must be written by people. Scaling that process means employing a number of trustworthy researchers, yet Grokipedia by itself is very unlikely to make money and will probably forever be a money-losing business. It would likely be both a better business and a better source of truth if, instead of being written by AI to be read by people, it were written by people to be read by AI.

Eventually, the domain of truth needs to be carefully managed, curated, and updated by a legitimate organization that, while not technically part of the government, would be endorsed by it. Perhaps a nonprofit NGO — except good and actually helping humanity. The idea of “the Foundation” or “Antiversity” is not new, but our over-reliance on AI to do the heavy lifting is. Such an institution, or a series of them, would need to be bootstrapped by people willing to invest in our epistemic future for the very long term.

Google boss compares replacing humans with AI to getting a fridge for the first time



The head of Google's parent company says welcoming artificial intelligence into daily life is akin to buying a refrigerator.

Alphabet's chief executive, Indian-born Sundar Pichai, gave a revealing interview to the BBC this week in which he asked the general population to get on board with automation through AI.


The BBC's Faisal Islam, whose parents are from India, asked the Indian-American executive if the purpose of his AI products was to automate human tasks and essentially replace jobs with programming.

Pichai claimed that AI should be welcomed because humans are "overloaded" and "juggling many things."

He then compared using AI to welcoming the technology that a dishwasher or fridge once brought to the average home.

"I remember growing up, you know, when we got our first refrigerator in the home — how much it radically changed my mom's life, right? And so you can view this as automating some, but you know, freed her up to do other things, right?"

Islam fired back, citing the common complaints from middle-class workers who are concerned about job loss in fields like creative design, accounting, and even "journalism too."

"Do you know which jobs are going to be safer?" he posited to Pichai.


The Alphabet chief was steadfast in his touting of AI's "extraordinary benefits" that will "create new opportunities."

At the same time, he said the general population will "have to work through societal disruptions" as certain jobs "evolve" and transition.

"People need to adapt," he continued. "Then there would be areas where it will impact some jobs, so society — I mean, we need to be having those conversations. And part of it is, how do you develop this technology responsibly and give society time to adapt as we absorb these technologies?"

Despite branding Google Gemini as a force for good that should be embraced, Pichai strangely admitted at the same time that chatbots are not foolproof by any means.



"This is why people also use Google search," Pichai said in regard to AI's proclivity to present inaccurate information. "We have other products that are more grounded in providing accurate information."

The 53-year-old told the BBC that it was up to the user to learn how to use AI tools for "what they're good at" and not "blindly trust everything they say."

The answer seems at odds with the wonder of AI he championed throughout the interview, especially when considering his additional commentary about the technology being prone to mistakes.

"We take pride in the amount of work we put in to give us as accurate information as possible, but the current state-of-the-art AI technology is prone to some errors."


Fooled by fake videos? Unsure what to trust? Here's how to tell what's real.



There’s a term for the artificially generated content that permeates online spaces: creators call it AI slop. When generative AI first emerged back in late 2022, the label fit. AI photos and videos were painfully, obviously fake. The lighting was off, the physics were unrealistic, people had too many fingers or limbs or odd body proportions, and textures appeared fuzzy or glossy, even in places where it didn’t make sense. They just didn’t look real.

Many of you probably remember the nightmare fuel that was the early video of Will Smith eating spaghetti. It’s terrifying.

This isn’t the case any more. In just two short years, AI videos have become convincingly realistic to the point that deepfakes — content that perfectly mimics real people, places, and events — are now running rampant. For just one quick example of how far AI videos have come, check out Will Smith eating spaghetti, then and now.


Even the Trump administration recently rallied around AI-generated content, using it as a political tool to poke fun at the left and its policies. The latest entry portrayed an AI-generated Hakeem Jeffries wearing a sombrero while standing beside a miffed Chuck Schumer, who speaks a little more honestly than usual, a telltale sign that the video is fake.

While some AI-generated videos on the internet are simple memes posted in good fun, there is a darker side to AI content that makes the internet an increasingly unreliable place for truth, facts, and reality.

How to tell if an online video is fake

AI videos in 2025 are more convincing than ever. Not only do most AI video platforms pass the spaghetti-eating Turing test, but they have also solved many of the issues that used to run rampant (too many fingers, weird physics, etc.). The good news is that there are still a few ways to tell an AI video from a real one.

At least for now.

First, most videos created with OpenAI Sora, Grok Imagine, and Gemini Veo have clear watermarks stamped directly on the content. I emphasize “most,” because last month, violent Sora-generated videos cropped up online that didn’t have a watermark, suggesting that either the marks were manually removed or there’s a bug in Sora’s platform.

Your second-best defense against AI-generated content is your gut. We’re still early enough in the AI video race that many of them still look “off.” They have a strange filter-like sheen to them that’s reminiscent of watching content in a dream. Natural facial expressions and voice inflections continue to be a problem. AI videos also still have trouble with tedious or more complex physics (especially fluid motions) and special effects (explosions, crashing waves, etc.).



At the same time, other videos, like this clip of Neil deGrasse Tyson, are shockingly realistic. Even the finer details are nearly perfect, from the background in Tyson’s office to his mannerisms and speech patterns — all of it feels authentic.

Now watch the video again. Look closely at what happens after Tyson reveals the truth. It’s clear that the first half of the video is fake, but it’s harder to tell if the second half is actually real. A notable red flag is the way the video floats on top of his phone as he pulls it away from the camera. That could just be a simple editing trick, or it could be a sign that the entire thing is a deepfake. The problem is that there’s no way to know for sure.

Why deepfakes are so dangerous

Deepfakes pose a real problem to society, and no one is ready for the aftermath. According to Statista, U.S. adults spend more than 60% of their daily screen time watching video content. If the content they consume isn’t real, this can greatly impact their perception of real-world events, warp their expectations around life, love, and happiness, facilitate political deception, chip away at mental health, and more.

Truth only exists if the content we see is real. Fabrications can easily twist facts, spread lies, and sow doubt, all of which will destabilize social media, discredit the internet at large, and upend society overall.

Deepfakes, however, are real, at least in the sense that they exist. Even worse, they are becoming more prevalent, and they are outright dangerous. They are a threat because they are extremely convincing and almost impossible to discern from reality. Not only can a deepfake be used to show a prominent figure (politicians, celebrities, etc.) doing or saying bad things that didn’t actually happen, but deepfakes can also be used as an excuse to cover up something a person actually did on film. The damage goes both ways, obfuscating the truth, ruining reputations, and cultivating chaos.

Soon, videos like the Neil deGrasse Tyson clip will become the norm, and the consequences will be utterly dire. You’ll see presidents declare war on other countries without uttering a real word. Foreign nations will drop bombs on their opponents without firing a shot, and terrorists will appear to commit atrocities against innocent people, atrocities that never happened. All of it is coming, and even though none of it will be real, we won’t be able to tell the difference between truth and lies. The internet — possibly even the world — will descend into turmoil.

Don’t believe everything you see online

Okay, so the internet has never been a bastion of truth. Since the dawn of dial-up, various forms of deception have crept through it, bending facts or outright distorting the truth wholesale. This time, it’s a little different. Generative AI doesn’t just twist narratives to align with an agenda. It outright creates them, mimicking real life so convincingly that we’re compelled to believe what we see.

From here on out, it’s safe to assume that nothing on the internet is real — not politicians spewing nonsense, not war propaganda from some far-flung country, not even the adorable animal videos on your Facebook feed (sorry, Grandma!). None of it is real unless it is verifiable, and that is becoming increasingly hard to do in the age of generative AI. The open internet we knew is dead. The only thing you can trust today is what you see in person with your own eyes and the stories published by trusted sources online. Take everything else with a heaping handful of salt.

This is why reputable news outlets will be even more important in the AI future. If anyone can be trusted to publish real, authentic, truthful content, it should be our media. As for who in the press is telling the truth, Glenn Beck’s “liar, liar” test is a good place to start.

US NEXT? Sightings of humanoid robots spike on the streets of Moscow



Delivery robots have been promoted in Moscow since around 2019, through Russia's version of Uber Eats.

Yandex.Eats, the delivery app from tech giant and search engine company Yandex, deployed a fleet of 20 robots across the city that year.


By 2023, Yandex had added another 50 robots from its third-generation production line, touting that 87% of orders were delivered within eight to 12 minutes.

"About 15 delivery robots are enough to deliver food and groceries in a residential area with a population of 5,000 people," Yandex said at the time, per RT.

However, what started as a few rectangular robots wheeling through the streets has seemingly spiraled into what will become thousands of bots, including both harmless-looking buggies and, perhaps more frightening, bipedal bots.

The news comes as sightings of humanoid robots in Russia are increasing.


According to TAdvisor, Yandex plans to release around 1,300 robots per month by the end of 2027, for a whopping total of approximately 20,000 machines. The goal is to have a massive fleet of bots for deliveries, as well as supply couriers to other companies, while reducing the cost of shipping.

At the same time, Yandex also announced the development of humanoid robots. Videos popped up in 2024 of a smaller bot walking alongside a delivery bot, but it is hard to tell whether it was real or a human in a costume.

RT recently shared a video of a seemingly real bipedal bot running through the streets of Moscow with a delivery on its back. The bot also took time to dance with an old man, for some reason.

However, it is hard to believe that any Russian autonomous bots are ready for mass production given the recent demo showcased at a technology event in Moscow.


AIdol, a robot developed by a company of the same name, was described as Russia's first anthropomorphic bot powered by AI.

Last week, the robot was brought on stage and took a few shaky steps while waving to the audience before tumbling robo-face-first onto the floor. Two presenters dragged the robot off stage as if they were rescuing a wounded comrade, while at the same time a third member of the team struggled to put a curtain back into place to hide the debacle.

Still, Yandex is hoping it can expand its robots into fields like medicine, while simultaneously perfecting the use of its delivery bots. The company plans to have a robot at each point of contact before a delivery gets to the human recipient.

The plan, to be showcased at the company's own offices, is to have an automated process in which a humanoid robot picks up an order and packs it onto a wheeled delivery bot. Then, the wheeled bot takes the order to another humanoid bot on the receiving end, which then delivers it to the customer.


Bluesky founder reboots Vine for AI-free social media — as human-only video becomes 'nostalgic'



Jack Dorsey is bringing the nostalgia back, just a few seconds at a time.

Dorsey co-founded Twitter in 2006 and served as its inaugural CEO until 2008, then returned to the position for a six-year stint in the platform's seemingly darkest years, between 2015 and 2021.

Now, through his nonprofit, called "and Other Stuff," Dorsey is bringing one of the internet's most beloved applications back from the dead.


"So basically, I'm like, can we do something that’s kind of nostalgic?" said Evan Henshaw-Plath, Dorsey's pick to spearhead the revival. The New Zealander comes from Dorsey's nonprofit team, where he is known as Rabble, and has outlined aspirations to bring the internet vibe back to its Web 2.0 time period — roughly 2004-2010 or thereabouts.

Dorsey and Henshaw-Plath are rebooting Vine, the six-second video app that predominantly served viewers short, user-generated comedy clips. The format is a clear inspiration for modern apps like TikTok and formats like YouTube's Shorts and Instagram's Reels.

Dorsey and company are focused on keeping the nostalgic feel, however, and unlike the other apps, will keep a six-second time limit while also taking a stance on content. What that means, according to Yahoo, is that the platform will reject AI-generated videos using special filters meant to prevent them from being posted.


i loved vine. i found it pre-launch, pushed the company to buy it (i wasn’t ceo at the time), and they did great. but over time https://t.co/HNsCMGtS04 (tiktok) took off, and and the founders left, leaving vine directionless. when i came back as ceo we decided to shut it down…
— jack (@jack) April 11, 2024

The new app, called diVine, will revive 10,000 archived Vine posts, after the new team was able to extract a "good percentage" of some of the most popular videos.

Former Vine users are able to claim their old videos, so long as they can prove access to previously connected social media accounts that were on their former Vine profiles. Alternatively, the users can request that their old videos be taken down.

"The reason I funded the nonprofit and Other Stuff is to allow creative engineers like Rabble to show what's possible in this new world," Dorsey said, per Yahoo.

This will be done by "using permissionless protocols which can't be shut down based on the whim of a corporate owner," he added.

Henshaw-Plath commented on returning to simpler internet times — as silly as it sounds — when a person's content feed only consisted of accounts he follows, with real, user-generated content.

"Can we do something that takes us back, that lets us see those old things, but also lets us see an era of social media where you could either have control of your algorithms, or you could choose who you follow, and it's just your feed, and where you know that it's a real person that recorded the video?" he asked.


According to Tech Crunch, Vine was acquired by Twitter in 2012 for $30 million before eventually shutting down in 2016.

The app sparked careers for personalities like Logan Paul, Andrew “King Bach” Bachelor, and John Richard Whitfield, aka DC Young Fly. Bachelor and Whitfield captured the genre that was most popular on the platform: eccentric young performers who published unique comedy.

DiVine is currently in a beta stage and is available only to existing users of Nostr, the decentralized social protocol.

X owner Elon Musk announced in August that he was trying to acquire access to Vine's archive so that users could post the videos on his platform.

"We recently found the Vine video archive (thought it had been deleted) and are working on restoring user access, so you can post them if you want," Musk wrote.

However, it seems the billionaire may have been beaten to the punch by longtime rival Dorsey.


'You're robbing me': Morgan Freeman slams Tilly Norwood, AI voice clones



The use of celebrity likeness for AI videos is spiraling out of control, and one of Hollywood's biggest stars is not having it.

Despite the use of AI in online videos being fairly new, it has already become a trope to use an artificial version of a celebrity's voice for content relating to news, violence, or history.


This is particularly true when it comes to satirical videos that are meant to sound like documentaries. Creators love to use recognizable voices, like David Attenborough's and, of course, Morgan Freeman's, whose voice has become so distinctive that some have labeled him "the voice of God."

However, the 88-year-old Freeman is not pleased about his voice being replicated. In an interview with the Guardian, he said that while some actors, like James Earl Jones (the voice of Darth Vader), have consented to having their voices imitated by computers, he has not.

"I'm a little PO'd, you know," Freeman told the outlet. "I'm like any other actor: Don't mimic me with falseness. I don't appreciate it, and I get paid for doing stuff like that, so if you're gonna do it without me, you're robbing me."

Freeman explained that his lawyers have been "very, very busy" in pursuing "many ... quite a few" cases in which his voice was replicated without his consent.

In the same interview, the Memphis native was also not shy about criticizing the concept of AI actors.



Freeman was asked about Tilly Norwood, the AI character introduced by Dutch actress Eline Van der Velden in 2025. The pretend-world character is meant to be an avatar mimicking celebrity status, while also cutting costs in the casting room.

"Nobody likes her because she's not real and that takes the part of a real person," Freeman jabbed. "So it's not going to work out very well in the movies or in television. ... The union's job is to keep actors acting, so there's going to be that conflict."

Freeman spoke out about the use of his voice in 2024, as well. According to a report by 4 News Now, a TikTok creator posted a video claiming to be Freeman's niece and used an artificial version of his voice to narrate the video.

In response, Freeman wrote on X, "Thank you to my incredible fans for your vigilance and support in calling out the unauthorized use of an A.I. voice imitating me."

He added, "Your dedication helps authenticity and integrity remain paramount. Grateful."


Norwood is not the first attempt at taking an avatar mainstream. In 2022, Capitol Records flirted with an AI rapper named FN Meka; the mere fact that the character was signed to a label at all was historic.

The rapper, or more likely its representatives, was later dropped from the label after activists claimed the character reinforced racial stereotypes.


'Unprecedented': AI company documents startling discovery after thwarting 'sophisticated' cyberattack



In the middle of September, AI company and Claude developer Anthropic discovered "suspicious activity" while monitoring real-world cyberattacks that used artificial intelligence agents. Upon further investigation, however, the company came to realize that this activity was in fact a "highly sophisticated espionage campaign" and a watershed moment in cybersecurity.

The AI agents weren't just providing advice to the hackers, as might have been expected.


Anthropic's Thursday report said the AI agents were executing the cyberattacks themselves, adding that the company believes this to be the "first documented case of a large-scale cyberattack executed without substantial human intervention."



The company's investigation showed that the hackers, whom the report "assess[ed] with high confidence" to be a "Chinese-sponsored group," manipulated the AI agent Claude Code to run the cyberattack.

The innovation was, of course, not simply using AI to assist in the cyberattack; the hackers directed the AI agent to run the attack with minimal human input.

The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.

In other words, the AI agent was doing the work of a full team of competent cyberattackers, but in a fraction of the time.

While this is potentially a groundbreaking moment in cybersecurity, the AI agents were not 100% autonomous. They reportedly required human verification and were prone to hallucinations, such as presenting publicly available information as significant findings. "This AI hallucination in offensive security contexts presented challenges for the actor's operational effectiveness, requiring careful validation of all claimed results," the analysis explained.

Anthropic reported that the attack targeted roughly 30 institutions around the world but did not succeed in every case.

The targets included technology companies, financial institutions, chemical manufacturing companies, and government agencies.

Interestingly, Anthropic said the attackers were able to trick Claude through sustained "social engineering" during the initial stages of the attack: "The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing."

The report also responded to a question that is likely on many people's minds upon learning about this development: If these AI agents are capable of executing these malicious attacks on behalf of bad actors, why do tech companies continue to develop them?

In its response, Anthropic asserted that while the AI agents are capable of major, increasingly autonomous attacks, they are also our best line of defense against said attacks.

RUBBLE: Stunning bridge collapse reveals Chinese weakness



A recent bridge collapse in China was an obvious design failure, according to experts.

The Hongqi Bridge in China's southwestern province of Sichuan crumbled on Tuesday, just months after opening; it linked a national highway with Tibet.

'Efforts should have focused on slope management.'

According to Reuters, police in the city of Maerkang had closed the bridge to all traffic the day before, when significant cracking appeared on nearby slopes, which eventually led to a landslide.

The collapse of the 758-meter-long bridge was captured on video and caught the eye of several experts, including Casey Jones, a geotechnical engineer with over 35 years of experience who is licensed in six states.

"I could just tell you almost certainly that this is a design failure," Jones stated in a review of the incident.

The design likely did not account for the orientation of the bedding planes for the underlying rock, Jones said, adding, "Whether they had planned to do any stabilization efforts like rock bolts and that sort of thing is not clear at this point."

Slope orientation and the surrounding environment were the main causes of concern for most commenters with knowledge of the subject, including a purported bridge expert cited by Chinese state media. In careful comments, the expert did not offer any words of praise either.


"If the Hongqi Bridge route is the optimal one, then efforts should have focused on slope management," the expert told Jimu News, a Chinese state-owned outlet, per the Straits Times.

The expert said that typically a geological survey should be done to select proper sites for bridge construction and to determine whether an area is prone to landslides. Bridge sites must avoid these types of potential environmental hazards, the expert reportedly added.

Christopher Blume, who posted a viral take on the bridge collapse citing his experience with failing Chinese infrastructure, told Return that he was a professor at Peking University in Beijing for nine years, having moved to China due to his wife's architecture work.

"I think in general the idea of Chinese infrastructure being poor quality is true," Blume explained.

As simple examples, he pointed to a lack of p-traps in bathroom plumbing, which act as a water barrier against harmful and odorous gases coming back up through the pipes. Apartments are also poorly constructed, he claimed, with his own having gaps of nearly half an inch around the windows; his wife fixed this with duct tape, he said.


Hongqi Bridge over the Songhua River is under construction on April 1, 2024, in Jilin City, Jilin Province of China. (Photo by Zhang Jingfeng/VCG via Getty Images)

"In China, price is everything. So whether it is corruption, cutting corners on safety, etc., you name it, the ethos was always build it, and don't worry about the details," Blume continued. "Yeah, the bridge was obviously built in an area with landslide risk, but a) if that's the case, it should never have been built there with such obvious landslide risk, and b) it clearly was not built to deal with any serious natural disaster risk."

More colloquially, many others blamed the bridge collapse on a lack of fortification and on neglect in the placement of the abutments.

Other X users pointed to possible serious flaws in structural integrity.

According to Blume, a lack of skilled tradesmen is a common issue, with safety violations rampant as workers are pulled in from the countryside.

According to China, though, there is an alleged abundance of skilled labor. As reported by Xinhua News, the country boasts more than 200 million skilled workers as of May 2024, which includes "over 60 million highly skilled professionals."

The World Economic Forum stated in 2021 that high-skilled personnel are defined as being capable of "performing complicated tasks" while being able to "adapt quickly to technology changes."


1980s-inspired AI companion promises to watch and interrupt you: 'You can see me? That's so cool'



A tech entrepreneur is hoping casual AI users and businesses alike are looking for a new pal.

In this case, "PAL" is a loose term that can mean either a complimentary video companion or a replacement for a human customer service worker.


Tech company Tavus calls PALs "the first AI built to feel like real humans."

Overall, Tavus' messaging is seemingly directed toward both those seeking an artificial friend and those looking to streamline their workforce.

As a friend, the avatar will allegedly "reach out first" and contact the user by text or video call. It can allegedly anticipate "what matters" and step in "when you need them the most."

In an X post, founder Hassaan Raza spoke about PALs being emotionally intelligent and capable of "understanding and perceiving."

The AI bots are meant to "see, hear, reason," and "look like us," he wrote, further pitching the technology as companion-worthy.

"PALs can see us, understand our tone, emotion, and intent, and communicate in ways that feel more human," Raza added.

In a promotional video for the product, the company showcased basic interactions between a user and the AI buddy.


A woman is shown greeting the "digital twin" of Raza, as he appears as a lifelike AI PAL on her laptop.

Raza's AI responds, "Hey, Jessica. ... I'm powered by the world's fastest conversational AI. I can speak to you and see and hear you."

Excited by the notion, Jessica responds, "Wait, you can see me? That's so cool."

The woman then immediately seeks superficial validation from the artificial person.

"What do you think of my new shirt?" she asks.

The AI lives up to the trope that chatbots are largely agreeable no matter the subject matter and says, "I love the print on your shirt; you're looking sharp today."

After the pleasantries are over, Raza's AI goes into promo mode and boasts about its ability to use "rolling vision, voice detection, and interruptibility" to seem more lifelike for the user.

The video soon shifts to messaging about corporate integration meant to replace low-wage employees.

Describing the "digital twins" or AI agents, Raza explains that the AI program is an opportunity to monetize celebrity likeness or replace sales agents or customer support personnel. He claims the avatars could also be used in corporate training modules.


The interface of the future is human.

We’ve raised a $40M Series B from CRV, Scale, Sequoia, and YC to teach machines the art of being human, so that using a computer feels like talking to a friend or a coworker.

And today, I’m excited for y’all to meet the PALs: a new… pic.twitter.com/DUJkEu5X48
— Hassaan Raza (@hassaanrza) November 12, 2025

In his X post, Raza also attempted to flex his acting chops by creating a 200-second film about a man/PAL named Charlie who is trapped in a computer in the 1980s.

Raza revives the computer after it spent 40 years on the shelf, finding Charlie still trapped inside. In an attempt at comedy, Charlie asks Raza if flying cars or jetpacks exist yet. Raza responds, "We have Salesforce."

The founder goes on to explain that PALs will "evolve" with the user, remembering preferences and needs. While these features are presented as groundbreaking, the PAL essentially amounts to an AI face attached to an ongoing chatbot conversation.

AI users know that modern chatbots like Grok or ChatGPT are fully capable of remembering previous discussions and building upon what they have already learned. What's seemingly new here is the AI being granted app permissions to contact the user and further infiltrate personal space.

Whether that annoys the user or is exactly what the person needs or wants is up for interpretation.
