Country music's MOST popular song is AI-generated



The No. 1 song on Billboard’s Country Digital Song Sales chart isn’t sung by a human. Instead, it was generated entirely by AI — which may have devastating implications for music, creativity, and the very definition of humanity.

The song “Walk My Walk” is by AI artist Breaking Rust and features lyrics like, “Every scar’s a story that I survived / I’ve been through hell, but I’m still alive.”

“They say slow down, boy, don’t go too fast / But I ain’t never been one to live in the past,” croons the AI artist.

“If you look at some of the lyrics of this song, I mean it talks about how he’s been dragged through the mud. He’s, you know, had to really stand. I mean, it doesn’t know any of this stuff. None of it is real. And yet it is assembling it in a way that is so appealing, it’s number one on the Billboard country music chart,” Blaze Media co-founder Glenn Beck says on “The Glenn Beck Program.”


“The whole world is about to change,” he continues. “You know, I just heard Elon Musk say that in five years, there’s not going to be phones or apps. It will just be some sort of a box or device that you kind of carry around with you and it’s listening. It’s anticipating. It’s AI.”

“And it will know what you want to hear, what you want, and it will create the music you want to hear. It will create the podcast you want to hear. It will do all this stuff for you. So we will be in our own universe even more than we are right now,” he adds.

This has led Glenn to ask some serious introspective questions like, “If AI can fake being a human and sing soulfully while not having a soul, what does it mean to be human?”

“I think a lot of people won’t care,” BlazeTV host Stu Burguiere chimes in. “Like, people won’t care if it is made by humans or not if they like it. And they seem to like it.”

While both Glenn and Stu agree AI will likely take over the arts, Glenn believes that “handmade is going to come back into style at some point.”

“Human-made will come back into style,” he says. “But ... we’re going to go through a period where it’s going to get really scary.”


Elon Musk to reveal flying car next year



Elon Musk says the next Tesla Roadster might fly. Not figuratively — literally.

Imagine an all-electric supercar that hits 60 mph in under two seconds, then lifts off the pavement like something out of "The Jetsons." It sounds impossible, even absurd. But during a recent appearance on "The Joe Rogan Experience," Musk hinted that the long-delayed Tesla Roadster is about to do the unthinkable: merge supercar speed with vertical takeoff.


As someone who has test-driven nearly every kind of machine on four (and sometimes fewer) wheels, I’ve seen hype before. But this time, it’s not just marketing spin. Tesla is preparing a prototype demo that could change how we think about personal transportation — or prove that even Elon Musk can aim too high.

Rogan reveal

On Halloween, Musk told Joe Rogan that Tesla is “getting close to demonstrating the prototype,” adding with his usual flair: “One thing I can guarantee is that this product demo will be unforgettable.”

Rogan, always the skeptic, pushed for details. Wings? Hovering? Musk smirked: “I can’t do the unveil before the unveil. But I think it has a shot at being the most memorable product unveil ever.”

He even invoked his friend and PayPal co-founder Peter Thiel, who once said, "We wanted flying cars; instead we got 140 characters."

Musk’s response: “I think if Peter wants a flying car, he should be able to buy one.”

That’s classic Elon — part visionary, part showman. But underneath the bravado lies serious engineering. Musk hinted at SpaceX technology powering the car.

The demonstration, now scheduled for April 1, 2026 (yes, April Fools’ Day), is meant to prove the impossible. Production could start by 2027 or 2028, but given Tesla’s history of optimistic timelines, it may be longer before any of us see a flying Roadster on the road — or in the air.

Good timing

Tesla’s timing isn’t accidental. The company’s Q3 2025 profits fell short due to tariffs, R&D spending, and the loss of federal EV tax credits. With electric vehicle demand cooling, Musk knows how to recapture attention: promise something audacious.

Remember the Cybertruck’s “unbreakable” windows? The demo didn’t go as planned — but it worked as a publicity move. A flying Tesla Roadster could do the same, turning investor eyes (and wallets) back toward Tesla’s most thrilling frontier.

Hovering hype

So can a Tesla actually fly? It may use cold-gas thrusters — essentially small rocket nozzles that expel compressed air for brief, powerful thrusts. The result could be hovering, extreme acceleration, or even short hops over obstacles.

There’s also talk of “fan car” technology, inspired by 1970s race cars that used vacuum fans to suck the car to the track for impossible cornering speeds. Combine that with Tesla’s AI-driven Full Self-Driving systems and new battery packs designed for over 600 miles of range, and the idea starts to sound just plausible enough.

The challenge? Energy density. Vertical flight consumes enormous power, and even Tesla’s advanced 4680 cells may struggle to deliver it without sacrificing range. And if the Roadster truly hovers, it will need reinforced suspension, stability controls, and noise-dampening tech to keep your driveway from turning into a launchpad.
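For the technically curious, a back-of-the-envelope hover calculation shows why energy is the binding constraint. The sketch below uses ideal actuator-disc (momentum) theory with guessed numbers (an assumed 2,000 kg curb weight, 2 m² of effective thruster area, and a 100 kWh usable pack), since Tesla has published no specs:

```python
import math

# Ideal hover power from actuator-disc (momentum) theory:
#   P = sqrt(T^3 / (2 * rho * A)), where thrust T = m * g.
# Every vehicle figure below is an assumption, not a Tesla spec.

m = 2000.0      # assumed vehicle mass, kg
g = 9.81        # gravity, m/s^2
rho = 1.225     # sea-level air density, kg/m^3
A = 2.0         # assumed effective thruster/fan disc area, m^2

T = m * g                              # thrust required to hover, N
P = math.sqrt(T**3 / (2 * rho * A))    # ideal induced power, W

battery_kwh = 100.0                    # assumed usable pack energy, kWh
hover_min = battery_kwh * 3.6e6 / P / 60

print(f"Hover thrust: {T/1000:.1f} kN, ideal power: {P/1000:.0f} kW")
print(f"Ideal hover time on {battery_kwh:.0f} kWh: {hover_min:.1f} minutes")
```

Even under these generous, lossless assumptions, the answer is on the order of a megawatt, and the whole pack is gone in under five minutes of hovering. Cold-gas thrusters are far less efficient than an ideal disc, so short hops, not sustained flight, are the realistic ceiling.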

Sky's the limit

Musk isn’t the first to chase this dream. The “flying car” has tempted inventors since the 1910s — and disappointed them nearly as long.

In the optimistic 1950s, Ford’s Advanced Design Studio built the Volante Tri-Athodyne, a ducted-fan prototype that looked ready for takeoff but never left the ground. The Moulton Taylor Aerocar actually flew, cruising at 120 mph and folding its wings for the highway — but only five were ever built.

Even the military tried. The U.S. and Canadian armies funded the Avrocar, a flying saucer-style VTOL craft that could hover but not climb more than six feet. Every generation since has produced new attempts — from the AVE Mizar (a flying Ford Pinto that ended in tragedy) to today’s eVTOL startups like Joby and Alef Aeronautics, the latter already FAA-certified for testing.

The dream keeps coming back because it represents freedom — freedom from traffic, limits, and gravity itself.

Got a permit for that?

Here’s where reality checks in. The Federal Aviation Administration now classifies electric vertical takeoff and landing aircraft under a new “powered-lift” category, with training requirements drawn from both airplane and helicopter certification. You would need a pilot’s license, medical exams, and specialized instruction to legally take off.

Insurance? Astronomical. Airspace? Restricted. Maintenance? Complex. In short: This won’t replace your daily driver any time soon. Even if the Roadster hovers, the FAA isn’t handing out flight permits for your morning commute.


Free parachute with purchase

Flying cars sound thrilling until you consider what happens when one malfunctions. A blown tire is one thing; a blown thruster at 200 feet is another. Tesla’s autonomy might help mitigate pilot error, but weather, visibility, and battery reliability all pose major challenges.

NASA and the FAA are developing new air traffic systems to handle “urban air mobility,” but even best-case scenarios involve strict flight corridors, automated control, and years of testing.

In short: We’re closer than ever to a flying car — but not that close.

Sticking the landing

So will the Tesla Roadster really fly? Probably — at least for a few seconds. Will it transform personal transportation? Not yet.

But here’s the thing: Musk doesn’t have to deliver a mass-market flying car. He just has to prove that it’s possible. And that may be enough to reignite public imagination and investor faith at a time when both are fading for the EV industry.

If the April 2026 demo delivers even a glimpse of flight, it will cement Tesla’s image as the company that still dares to dream big. If it flops, it will join the long list of “flying car” fantasies that fell back to Earth.

Either way, we’ll be watching — because when Elon Musk says he’s going to make a car fly, the world can’t help but look up.

Trump and Elon want TRUTH online. AI feeds on bias. So what's the fix?



The Trump administration has unveiled a broad action plan for AI (America’s AI Action Plan). The general vibe is one of treating AI like a business, aiming to sell the AI stack worldwide and generate lock-in for American technology. “Winning,” in this context, is primarily economic. The plan also includes the sorely needed idea of modernizing the electrical grid, a growing concern given rising electricity demand from data centers. While any extra business is welcome in a heavily indebted nation, the section on the political objectivity of AI is too brief, and it misunderstands both the root cause of political bias in AI and that bias's role in the culture war.

The plan uses the term "objective" and implies that a lack of objectivity is entirely the fault of the developer. For example:

Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.

The fear that AIs might tip the scales of the culture war away from traditional values and toward leftism is real. Try asking ChatGPT, Claude, or even DeepSeek about climate change, where COVID came from, or USAID.


This desire for objectivity of AI may come from a good place, but it fundamentally misconstrues how AIs are built. AI in general and LLMs in particular are a combination of data and algorithms, which further break down into network architecture and training methods. Network architecture is frequently based on stacking transformer or attention layers, though it can be modified with concepts like “mixture of experts.” Training methods are varied and include pre-training, data cleaning, weight initialization, tokenization, and techniques for altering the learning rate. They also include post-training methods, where the base model is modified to conform to a metric other than the accuracy of predicting the next token.
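For readers who want to see what “stacking transformer layers” actually looks like, here is a minimal sketch in PyTorch. The dimensions are toy values, position embeddings are omitted for brevity, and it illustrates the generic architecture described above, not any particular production model:

```python
import torch
import torch.nn as nn

# Minimal sketch of an LLM as "stacked transformer/attention layers."
# Toy dimensions; position embeddings omitted for brevity.

class Block(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):
        h = self.ln1(x)
        # Causal self-attention: each token attends only to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        a, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + a
        return x + self.mlp(self.ln2(x))

class TinyLM(nn.Module):
    def __init__(self, vocab=32000, d_model=256, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        self.blocks = nn.ModuleList(Block(d_model) for _ in range(n_layers))
        self.head = nn.Linear(d_model, vocab)  # next-token logits

    def forward(self, tokens):
        x = self.embed(tokens)
        for block in self.blocks:  # "stacking" is literally a loop
            x = block(x)
        return self.head(x)

logits = TinyLM()(torch.randint(0, 32000, (1, 16)))  # shape (1, 16, vocab)
```

Pre-training amounts to sliding text windows through this stack and adjusting the weights so the final layer assigns high probability to each actual next token; everything else (human-feedback tuning, safety layers) happens after that.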

Many have complained that post-training methods like Reinforcement Learning from Human Feedback (RLHF) introduce political bias into models at the cost of accuracy, causing them to avoid controversial topics or spout opinions approved by the companies — opinions usually farther to the left than those of the average user. “Jailbreaking” models to get around such restrictions was once a common pastime, but it is becoming harder, as corporate safety measures, sometimes as complex as entirely new models, scan both the input to and the output from the underlying base model.
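That input-and-output scanning is architecturally simple, which is part of why it is hard to defeat. A minimal sketch, with `base_model` and `safety_classifier` as hypothetical stand-ins for whatever proprietary models a vendor actually uses:

```python
REFUSAL = "I can't help with that."

def guarded_generate(prompt: str, base_model, safety_classifier) -> str:
    """Sketch of the two-pass safety wrapper described above."""
    # Pass 1: a separate model screens the user's input.
    if safety_classifier(prompt) > 0.5:   # assumed risk score in [0, 1]
        return REFUSAL
    draft = base_model(prompt)
    # Pass 2: the same or another model screens the base model's output.
    if safety_classifier(draft) > 0.5:
        return REFUSAL
    return draft
```

Because the outer classifier sees the final text no matter how cleverly the prompt was phrased, prompt-level jailbreaks that fool the base model still get caught on the way out.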

As a result of this battle between RLHF and jailbreakers, an idea has emerged that these post-training methods and safety features are how liberal bias gets into the models. The belief is that if we simply removed these, the models would display their true objective nature. Unfortunately for both the Trump administration and the future of America, this is only partially correct. Developers can indeed make a model less objective and more biased in a leftward direction under the guise of safety. However, it is very hard to make models that are more objective.

The problem is data

According to the study “Google AI Mode vs. Traditional Search & Other LLMs,” the top domains cited by LLMs are Reddit (40%), YouTube (26%), Wikipedia (23%), Google (23%), Yelp (21%), Facebook (20%), and Amazon (19%). (The figures overlap because a single response can cite several domains.)

This seems to imply a lot of the outside-fact data in AIs comes from Reddit. Spending trillions of dollars to create an “eternal Redditor” isn’t going to cure cancer. At best, it might create a “cure cancer cheerleader” who hypes up every advance and forgets about it two weeks later. One can only do so much in the algorithm layer to counteract the frame of mind of the average Redditor. In this sense, the political slant of LLMs is less due to the biases of developers and corporations (although they do exist) and more due to the biases of the training data, which is heavily skewed toward being generated during the "woke tyranny" era of the internet.

In this way, the AI bias problem is not about removing bias to reveal a magic objective base layer. Rather, it is about creating a human-generated and curated set of true facts that can then be used by LLMs. Using legislation to remove the methods by which left-leaning developers push AIs into their political corner is a great idea, but it is far from sufficient. Getting humans to generate truthful data is extremely important.

The pipeline to create truthful data likely needs at least four steps.

1. Raw data generation of detailed tables and statistics (usually done by agencies or large enterprises).

2. Mathematically informed analysis of this data (usually done by scientists).

3. Distillation of scientific studies for educated non-experts (in theory done by journalists, but in practice rarely done at all).

4. Social distribution via either permanent (wiki) or temporary (X) channels.

This problem of truthful data plus commentary for AI training is a government, philanthropic, and business problem.


I can imagine an idealized scenario in which all these problems are solved by harmonious action in all three directions. The government can help the first portion by forcing agencies to be more transparent with their data, putting it into both human-readable and computer-friendly formats. That means more CSVs, plain text, and hyperlinks and fewer citations, PDFs, and fancy graphics with hard-to-find data. FBI crime statistics, immigration statistics, breakdowns of government spending, the outputs of government-conducted research, minute-by-minute election data, and GDP statistics are fundamentally pillars of truth and are almost always politically helpful to the broader right.

In an ideal world, the distillation of raw data into causal models would be done by a team of highly paid scientists via a nonprofit or a government contract. This work is too complex to be left to the crowd, and its benefits are too distributed to be easily captured by the market.

The journalistic portion of combining papers into an elite consensus could be done similarly to today: with high-quality, subscription-based magazines. While such businesses can be profitable, for this content to integrate with AI, the AI companies themselves need to properly license the data and share revenue.

The last step seems to be mostly working today, as it would be done by influencers paid via ad revenue shares or similar engagement-based metrics. Creating permanent, rather than disappearing, data (à la Wikipedia) is a time-intensive and thankless task that will likely need paid editors in the future to keep the quality bar high.

Freedom doesn't always boost truth

However, we do not live in an ideal world. The epistemic landscape has vastly improved since Elon Musk's purchase of Twitter. At the very least, truth-seeking accounts don’t have to deal with as much arbitrary censorship. Other media have made token statements claiming they will censor less, even as some AI “safety” features are ramped up to a much higher setting than social media censorship ever reached.

The challenge with X and other media is that tech companies generally favor technocratic solutions over direct payment for pro-social content. There seems to be a widespread belief in a marketplace of ideas: the idea that without censorship (or with only some person’s favorite censorship), truthful ideas will win over false ones. This likely contains an element of truth, but the peculiarities of each algorithm may favor only certain types of truthful content.

“X is the new media” is a common refrain. Yet both anonymous and public accounts on X are implicitly burdened with tasks as varied and complex as gathering election data, writing long think pieces, and consistently repeating slogans that reinforce a key message, all for the chance of a few Elon bucks. They are doing this while competing with stolen-valor thirst traps from overseas accounts. Obviously, most are not that motivated and stick to pithy and simple content rather than intellectually grounded think pieces. The broader “right” is still needlessly ceding intellectual and data-creation ground to the left, despite occasional victories in defunding anti-civilizational NGOs and taking control of key platforms.

The other issue for data creators across the political spectrum is the reliance on unpaid volunteers. As the economic belt inevitably tightens and productive people have less spare time, the supply of quality free data will shrink. It will shrink further as platforms and users, feeling rightful indignation at their data being “stolen” by AI companies making huge profits, move content into gatekept platforms like Discord. While X is unlikely to go back to the “left,” its quality can certainly fall further.

Even Redditors and Wikipedia contributors provide fairly complex, if generally biased, data that powers the entire AI ecosystem, also for free. A community of unpaid volunteers working to spread useful information sounds lovely in principle. But in addition to the decay in quality, these kinds of “business models” are generally very easy to disrupt with minor infusions of outside money; sometimes all it takes is paying one full-time person to post. If you are not paying to generate politically powerful content, someone else is always happy to.

The other dream of tech companies is to use AI to “re-create” the entirety of the pipeline. We have heard so much drivel about “solving cancer” and “solving science.” While speeding up human progress by automating simple tasks is certainly going to work and is already working, the dream of full replacement will remain a dream, largely because of “model collapse”: the degradation in quality that occurs when AIs are trained on data they generated themselves. Companies occasionally hype up “no data/zero-knowledge/synthetic data” training, but the big example from nearly a decade ago, “RL from random play,” which worked for chess and Go, went nowhere in games as complex as StarCraft.
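Model collapse is easy to demonstrate in miniature. In the toy sketch below (an illustration of the mechanism, not a claim about any production system), each generation's “model” is a Gaussian fitted only to samples from the previous generation's model, with no fresh human data:

```python
import numpy as np

# Toy model collapse: fit a Gaussian to the previous generation's samples,
# then sample the next generation from that fit. With no fresh human data,
# estimation noise compounds and the fitted spread tends to decay toward
# zero over generations: tails and diversity disappear first.

rng = np.random.default_rng(0)
n = 100                                  # small sample size, on purpose
data = rng.normal(0.0, 1.0, n)           # generation 0: "human" data

for gen in range(1, 31):
    mu, sigma = data.mean(), data.std()  # "train" the next model
    data = rng.normal(mu, sigma, n)      # it sees only model output
    if gen % 5 == 0:
        print(f"gen {gen:2d}: fitted std = {sigma:.3f}")
```

Any single run is noisy, but the long-run tendency of this feedback loop is toward vanishing variance, which is the statistical shape of the quality degradation the term describes.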

So where does truth come from?

This brings us to the recent example of Grokipedia. Perusing it gives one a sense that we have taken a step in the right direction, with an improved ability to summarize key historical events and medical controversies. However, a number of articles are lifted directly from Wikipedia, which risks drawing the wrong lesson. Grokipedia can’t “replace” Wikipedia in the long term because Grok’s own summarization is dependent on it.

Like many of Elon Musk’s ventures, Grokipedia is two steps forward, one step back. The forward steps are a customer-facing Wikipedia that seems to be of higher quality and a good example of AI-generated long-form content that is not mere slop, achieved by automating the tedious, formulaic steps of summarization. The backward step is a lack of understanding of what the ecosystem looks like without Wikipedia: Since so many of Grokipedia’s articles are lifted directly from Wikipedia, keeping neutral articles properly updated would become very hard if Wikipedia disappeared.

Even the current version suffers from a “chicken and egg” source-of-truth problem. If no AI has the real facts about the COVID vaccine and categorically rejects data about its safety or lack thereof, then Grokipedia will not be accurate on this topic unless a fairly highly paid editor researches and writes the true story. As mentioned, model collapse is likely to result from feeding too much of Grokipedia to Grok itself (and other AIs), leading to degradation of quality and truthfulness. Relying on unpaid volunteers to suggest edits creates a very easy vector for paid NGOs to influence the encyclopedia.

The simple conclusion is that to be good training data for future AIs, the next source of truth must be written by people. Scaling that process means employing a number of trustworthy researchers, and Grokipedia by itself is very unlikely to pay for them; it will probably forever be a money-losing business. It would likely be both a better business and a better source of truth if, instead of being written by AI to be read by people, it were written by people to be read by AI.

Eventually, the domain of truth needs to be carefully managed, curated, and updated by a legitimate organization that, while not technically part of the government, would be endorsed by it. Perhaps a nonprofit NGO — except good and actually helping humanity. The idea of “the Foundation” or “Antiversity” is not new, but our over-reliance on AI to do the heavy lifting is. Such an institution, or a series of them, would need to be bootstrapped by people willing to invest in our epistemic future for the very long term.

The Culture Wars Didn’t ‘Come For Wikipedia.’ Wikipedia Is Fueling Them With Lies

Congress should consider expanding its investigation of Wikipedia to cover not just the site's antisemitism but its active censorship.

Dem Nominee in Tennessee Special Election Smeared Her Own State As ‘Racist’

Tennessee is a racist state—at least according to Aftyn Behn, the Democratic nominee for the special election for Tennessee's Seventh Congressional District.


AI Idols Will Make Idiots Of Us All — If We Let Them

We're making utter fools of ourselves while claiming to have reached the apex of wisdom.

EXCLUSIVE: George Soros Gave $250K to British Group Working To Censor Conservative News Sites and ‘Kill Musk’s Twitter’

The left-wing philanthropy funded by George Soros, Open Society Foundations (OSF), bankrolls a British nonprofit that works to censor conservative news websites and social media companies, including through a plot to "kill" Elon Musk’s X by pressuring advertisers and investors to boycott the company.


Artificial intelligence just wrote a No. 1 country song. Now what?



The No. 1 country song in America right now was not written in Nashville or Texas or even L.A. It came from code. “Walk My Walk,” the AI-generated single by the AI artist Breaking Rust, hit the top spot on Billboard’s Country Digital Song Sales chart, and if you listen to it without knowing that fact, you would swear a real singer lived the pain he is describing.

Except there is no “he.” There is no lived experience. There is no soul behind the voice dominating the country music charts.


I will admit it: I enjoy some AI music. Some of it is very good. And that leaves us with a question that is no longer science fiction. If a machine can fake being human this well, what does it mean to be human?

A new world of artificial experience

This is not just about one song. We are walking straight into a technological moment that will reshape everyday life.

Elon Musk said recently that we may not even have phones in five years. Instead, we will carry a small device that listens, anticipates, and creates — a personal AI agent that knows what we want to hear before we ask. It will make the music, the news, the podcasts, the stories. We already live in digital bubbles. Soon, those bubbles might become our own private worlds.

If an algorithm can write a hit country song about hardship and perseverance without a shred of actual experience, then the deeper question becomes unavoidable: If a machine can imitate the soul, then what is the soul?

What machines can never do

A machine can produce, and soon it may produce better than we can. It can calculate faster than any human mind. It can rearrange the notes and words of a thousand human songs into something that sounds real enough to fool millions.

But it cannot care. It cannot love. It cannot choose right and wrong. It cannot forgive because it cannot be hurt. It cannot stand between a child and danger. It cannot walk through sorrow.

A machine can imitate the sound of suffering. It cannot suffer.

The difference is the soul. The divine spark. The thing God breathed into man that no code will ever have. Only humans can take pain and let it grow into compassion. Only humans can take fear and turn it into courage. Only humans can rebuild their lives after losing everything. Only humans hear the whisper inside, the divine voice that says, “Live for something greater.”

We are building artificial minds. We are not building artificial life.

Questions that define us

And as these artificial minds grow sharper, as their tools become more convincing, the right response is not panic. It is to ask the oldest and most important questions.

Who am I? Why am I here? What is the meaning of freedom? What is worth defending? What is worth sacrificing for?

That answer is not found in a lab or a server rack. It is found in that mysterious place inside each of us where reason meets faith, where suffering becomes wisdom, where God reminds us we are more than flesh and more than thought. We are not accidents. We are not circuits. We are not replaceable.


The miracle machines can never copy

Being human is not about what we can produce. Machines will outproduce us. That is not the question. Being human is about what we can choose. We can choose to love even when it costs us something. We can choose to sacrifice when it is not easy. We can choose to tell the truth when the world rewards lies. We can choose to stand when everyone else bows. We can create because something inside us will not rest until we do.

An AI content generator can borrow our melodies, echo our stories, and dress itself up like a human soul, but it cannot carry grief across a lifetime. It cannot forgive an enemy. It cannot experience wonder. It cannot look at a broken world and say, “I am going to build again.”

The age of machines is rising. And if we do not know who we are, we will shrink. But if we use this moment to remember what makes us human, it will help us to become better, because the one thing no algorithm will ever recreate is the miracle that we exist at all — the miracle of the human soul.


Kennedy Heir and House Candidate Jack Schlossberg Performed Nazi Salute in Since-Deleted Swipe at Elon Musk

Jack Schlossberg, a Kennedy family scion and Democratic primary candidate for New York’s 12th Congressional District, performed a Nazi salute in a since-deleted Instagram video reviewed by the Washington Free Beacon.
