Almost half of Gen Z wants AI to run the government. You should be terrified.



As the world trends toward embedding AI systems into our institutions and daily lives, it becomes increasingly important to understand the moral framework these systems operate on. When we encounter examples in which some of the most advanced LLMs appear to treat misgendering someone as a greater moral catastrophe than unleashing a global thermonuclear war, it forces us to ask important questions about the ideological principles that guide AI’s thinking.

It’s tempting to laugh this example off as an absurdity of a burgeoning technology, but it points toward a far more consequential issue that is already shaping our future. Whose moral framework is found at the core of these AI systems, and what are the implications?

We cannot outsource the moral foundation of civilization to a handful of tech executives, activist employees, or panels of academic philosophers.

Two recent interviews, taken together, have breathed much-needed life into this conversation — Elon Musk interviewed by Joe Rogan and Sam Altman interviewed by Tucker Carlson. In different ways, both conversations shine a light on the same uncomfortable truth: The moral logic guiding today’s AI systems is built, honed, and enforced by Big Tech.

Enter the ‘woke mind virus’

In a recent interview on “The Joe Rogan Experience,” Elon Musk expressed concerns about leading AI models. He argued that the ideological distortions we see across Big Tech platforms are now embedded directly into the models themselves.

He pointed to Google’s Gemini, which generated a slate of “diverse” images of the founding fathers, including a black George Washington. The model was instructed by Google to prioritize “representation” so aggressively that it began rewriting history.

Musk also referred to the previously mentioned misgendering versus nuclear apocalypse example before explaining that “it can drive AI crazy.”

“I think people don't quite appreciate the level of danger that we're in from the woke mind virus being effectively programmed into AI,” Musk explained. Once embedded, he argued, it is nearly impossible to extract: “Google’s been marinating in the woke mind virus for a long time. It's down in the marrow.”

Musk believes this issue goes beyond political annoyance and into the arena of civilizational threat. You cannot have superhuman intelligence trained on ideological distortions and expect a stable future. If AI becomes the arbiter of truth, morality, and history, then whoever defines its values defines the society it governs.

A weighted average

While Musk warns about ideology creeping into AI, OpenAI CEO Sam Altman quietly confirmed to Tucker Carlson that it is happening intentionally.

Altman began by telling Carlson that ChatGPT is trained “to be the collective of all of humanity.” But when Carlson pressed him on the obvious questions (who determines the moral framework, and whose values does the AI absorb?), Altman pulled back the curtain a bit.

He explained that OpenAI “consulted hundreds of moral philosophers” and then made decisions internally about what the system should consider right or wrong. Ultimately, Altman admitted, he is the one responsible.

“We do have to align it to behave one way or another,” he said.

Carlson pressed Altman on the idea, asking, “Would you be comfortable with an AI that was, like, as against gay marriage as most Africans are?”

Altman’s response was vague and concerning. He explained the AI wouldn’t outright condemn traditional views, but it might gently nudge users to consider different perspectives.

Ultimately, Altman says, ChatGPT’s morality should “reflect” the “weighted average” of “humanity’s moral view,” saying that average will “evolve over time.”

It’s getting worse

Anyone who thinks this conversation is hypothetical is not paying attention.

Recent research on “LLM exchange rates” found that major AI models, including GPT-4o, assign different moral worth to human lives based on nationality. For example, the tested LLMs treated the life of someone born in the U.K. as far less valuable than the life of someone from Nigeria or China. In fact, American lives ranked as the least valuable of any nationality included in the tests.

The same research showed that LLMs can assign different value scores to specific people. According to the models tested, Donald Trump and Elon Musk are valued less than Oprah Winfrey and Beyoncé.

Musk explained how LLMs, trained on vast amounts of information from the internet, become infected with the ideological biases and cultural trends that run rampant in some of the more popular corners of the digital realm.

This bias is not entirely the result of passively absorbing a collective moral framework from the internet; some of the decisions made by AI are the direct result of deliberate programming.

Google’s image fiascos revealed an ideological overcorrection so strong that historical truth took a back seat to political goals. It was a deliberate design feature.

For a more extreme example, we can look at DeepSeek, China’s flagship AI model. Ask it about Tiananmen Square, the Uyghur genocide, or other atrocities committed by the Chinese Communist Party, and suddenly it claims the topic is “beyond its scope.” Ask it about America’s faults, and it is happy to elaborate.


Each of these examples reveals the same truth: AI systems already have a moral hierarchy, and it didn’t come from voters, faith, traditions, or the principles of the Constitution. Silicon Valley technocrats and a vague internet-wide consensus established this moral framework.

The highest stakes

AI is rapidly integrating into society and our daily lives. In the coming years, AI will shape our education system, judicial process, media landscape, and every industry and institution worldwide.

Many young Americans are open to an AI takeover. A new Rasmussen Reports poll shows that 41% of young likely voters support giving artificial intelligence sweeping government powers. When nearly half of the rising generation is comfortable handing this level of authority to machines whose moral logic is designed by opaque corporate teams, it raises the stakes for society.

We cannot outsource the moral foundation of civilization to a handful of tech executives, activist employees, or panels of academic philosophers. We cannot allow the values embedded in future AI systems to be determined by corporate boards or ideological trends.

At the heart of this debate is one question we must confront: Who do you trust to define right and wrong for the machines that will define right and wrong for the rest of us?

If we don’t answer that question now, Silicon Valley certainly will.

Trump admin leaves Elon Musk's Grok, xAI off massive list of AI tech partners



Elon Musk's artificial intelligence platform has seemingly been left out of a government program meant to propel the technology forward.

On Monday, the White House announced a new project aimed at accelerating innovation and discovery to "solve the most challenging problems of this century."

'The Genesis Mission will bring together our Nation’s research and development resources.'

The new Genesis Mission is described by the Department of Energy as "a national initiative to build the world's most powerful scientific platform."

An executive order from the president titled "Launching the Genesis Mission" explained plans to integrate federal scientific datasets to train AI to test new hypotheses, automate research, and accelerate scientific breakthroughs.

"The Genesis Mission will bring together our Nation’s research and development resources — combining the efforts of brilliant American scientists, including those at our national laboratories, with pioneering American businesses; world-renowned universities; and existing research infrastructure, data repositories, production plants, and national security sites — to achieve dramatic acceleration in AI development and utilization."

With Elon Musk making strides in 2025 on both his Grok chatbot and its video generation model, Imagine, tech enthusiasts were shocked to find out that Musk's xAI was not on the list of partners for the project.


The Department of Energy includes 55 companies on its list of collaborators for Genesis, with xAI and Grok nowhere to be found.

Aside from the fact that Musk served as a special government employee under the Trump administration, his exclusion is even more surprising given both the length of the partner list and the generic nature of the companies involved. Amazon Web Services, Google, and Microsoft were announced as partners, as were AI companies like OpenAI and Scale AI.

It should be noted that the company xLight, which is listed by the DOE, is not affiliated with Musk.


"For [xAI] to not be a part of the Genesis Mission, it is not just an oversight, it would have to be an intentional omission," AI engineer Brian Roemmele wrote on X. "I spoke to someone on this project who asked for my input today, and it is the first thing I brought up. I am certain they will see the error made."

Blaze News contacted xAI for comment but did not receive an immediate reply. This article will be updated with any applicable response.

Whether a rift exists between Musk and the Trump administration is unclear, but the government seems steadfast in believing its mission is monumental in terms of importance, likening it to the Manhattan Project of World War II.

"The world's most powerful scientific platform to ever be built has launched," the DOE claimed on its X account. "This Manhattan-Project-level leap will fundamentally transform the future of American science and innovation."


It Turns Out DOGE Isn’t Dead — Despite The Media Hysteria


Country music's MOST popular song is AI-generated



The number one country song in America isn’t sung by a human. Instead, it was generated entirely by AI — which may have devastating implications for music, creativity, and the very definition of humanity.

The song “Walk My Walk” is by AI artist Breaking Rust and features lyrics like, “Every scar’s a story that I survived / I’ve been through hell, but I’m still alive.”

“They say slow down, boy, don’t go too fast / But I ain’t never been one to live in the past,” croons the AI artist.

“If you look at some of the lyrics of this song, I mean it talks about how he’s been dragged through the mud. He’s, you know, had to really stand. I mean, it doesn’t know any of this stuff. None of it is real. And yet it is assembling it in a way that is so appealing, it’s number one on the Billboard country music chart,” Blaze Media co-founder Glenn Beck says on “The Glenn Beck Program.”


“The whole world is about to change,” he continues. “You know, I just heard Elon Musk say that in five years, there’s not going to be phones or apps. It will just be some sort of a box or device that you kind of carry around with you and it’s listening. It’s anticipating. It’s AI.”

“And it will know what you want to hear, what you want, and it will create the music you want to hear. It will create the podcast you want to hear. It will do all this stuff for you. So we will be in our own universe even more than we are right now,” he adds.

This has led Glenn to ask some serious introspective questions like, “If AI can fake being a human and sing soulfully while not having a soul, what does it mean to be human?”

“I think a lot of people won’t care,” BlazeTV host Stu Burguiere chimes in. “Like, people won’t care if it is made by humans or not if they like it. And they seem to like it.”

While both Glenn and Stu agree AI will likely take over the arts, Glenn believes that “handmade is going to come back into style at some point.”

“Human-made will come back into style,” he says. “But ... we’re going to go through a period where it’s going to get really scary.”


Elon Musk to reveal flying car next year



Elon Musk says the next Tesla Roadster might fly. Not figuratively — literally.

Imagine an all-electric supercar that hits 60 mph in under two seconds, then lifts off the pavement like something out of "The Jetsons." It sounds impossible, even absurd. But during a recent appearance on "The Joe Rogan Experience," Musk hinted that the long-delayed Tesla Roadster is about to do the unthinkable: merge supercar speed with vertical takeoff.

If the April 2026 demo delivers even a glimpse of flight, it will cement Tesla’s image as the company that still dares to dream big.

As someone who has test-driven nearly every kind of machine on four (and sometimes fewer) wheels, I’ve seen hype before. But this time, it’s not just marketing spin. Tesla is preparing a prototype demo that could change how we think about personal transportation — or prove that even Elon Musk can aim too high.

Rogan reveal

On Halloween, Musk told Joe Rogan that Tesla is “getting close to demonstrating the prototype,” adding with his usual flair: “One thing I can guarantee is that this product demo will be unforgettable.”

Rogan, always the skeptic, pushed for details. Wings? Hovering? Musk smirked: “I can’t do the unveil before the unveil. But I think it has a shot at being the most memorable product unveil ever.”

He even invoked his friend and PayPal co-founder Peter Thiel, who once said, "We wanted flying cars; instead we got 140 characters."

Musk’s response: “I think if Peter wants a flying car, he should be able to buy one.”

That’s classic Elon — part visionary, part showman. But underneath the bravado lies serious engineering. Musk hinted at SpaceX technology powering the car.

The demonstration, now scheduled for April 1, 2026 (yes, April Fools’ Day), is meant to prove the impossible. Production could start by 2027 or 2028, but given Tesla’s history of optimistic timelines, it may be longer before any of us see a flying Roadster on the road — or in the air.

Good timing

Tesla’s timing isn’t accidental. The company’s Q3 2025 profits fell short due to tariffs, R&D spending, and the loss of federal EV tax credits. With electric vehicle demand cooling, Musk knows how to recapture attention: promise something audacious.

Remember the Cybertruck’s “unbreakable” windows? The demo didn’t go as planned — but it worked as a publicity move. A flying Tesla Roadster could do the same, turning investor eyes (and wallets) back toward Tesla’s most thrilling frontier.

Hovering hype

So can a Tesla actually fly? It may use cold-gas thrusters — essentially small rocket nozzles that expel compressed air for brief, powerful thrusts. The result could be hovering, extreme acceleration, or even short hops over obstacles.

There’s also talk of “fan car” technology, inspired by 1970s race cars that used vacuum fans to suck the car to the track for impossible cornering speeds. Combine that with Tesla’s AI-driven Full Self-Driving systems and new battery packs designed for over 600 miles of range, and the idea starts to sound just plausible enough.

The challenge? Energy density. Vertical flight consumes enormous power, and even Tesla’s advanced 4680 cells may struggle to deliver it without sacrificing range. And if the Roadster truly hovers, it will need reinforced suspension, stability controls, and noise-dampening tech to keep your driveway from turning into a launchpad.

Sky's the limit

Musk isn’t the first to chase this dream. The “flying car” has tempted inventors since the 1910s — and disappointed them nearly as long.

In the optimistic 1950s, Ford’s Advanced Design Studio built the Volante Tri-Athodyne, a ducted-fan prototype that looked ready for takeoff but never left the ground. The Moulton Taylor Aerocar actually flew, cruising at 120 mph and folding its wings for the highway — but only five were ever built.

Even the military tried. The U.S. and Canadian armies funded the Avrocar, a flying saucer-style VTOL craft that could hover but not climb more than six feet. Every generation since has produced new attempts — from the AVE Mizar (a flying Ford Pinto that ended in tragedy) to today’s eVTOL startups like Joby and Alef Aeronautics, the latter already FAA-certified for testing.

The dream keeps coming back because it represents freedom — freedom from traffic, limits, and gravity itself.

Got a permit for that?

Here’s where reality checks in. The Federal Aviation Administration now classifies electric vertical takeoff and landing aircraft under a new category requiring both airplane and helicopter training. You would need a pilot’s license, medical exams, and specialized instruction to legally take off.

Insurance? Astronomical. Airspace? Restricted. Maintenance? Complex. In short: This won’t replace your daily driver any time soon. Even if the Roadster hovers, the FAA isn’t handing out flight permits for your morning commute.


Free parachute with purchase

Flying cars sound thrilling until you consider what happens when one malfunctions. A blown tire is one thing; a blown thruster at 200 feet is another. Tesla’s autonomy might help mitigate pilot error, but weather, visibility, and battery reliability all pose major challenges.

NASA and the FAA are developing new air traffic systems to handle “urban air mobility,” but even best-case scenarios involve strict flight corridors, automated control, and years of testing.

In short: We’re closer than ever to a flying car — but not that close.

Sticking the landing

So will the Tesla Roadster really fly? Probably — at least for a few seconds. Will it transform personal transportation? Not yet.

But here’s the thing: Musk doesn’t have to deliver a mass-market flying car. He just has to prove that it’s possible. And that may be enough to reignite public imagination and investor faith at a time when both are fading for the EV industry.

If the April 2026 demo delivers even a glimpse of flight, it will cement Tesla’s image as the company that still dares to dream big. If it flops, it will join the long list of “flying car” fantasies that fell back to Earth.

Either way, we’ll be watching — because when Elon Musk says he’s going to make a car fly, the world can’t help but look up.

Trump and Elon want TRUTH online. AI feeds on bias. So what's the fix?



The Trump administration has unveiled a broad action plan for AI (America’s AI Action Plan). The general vibe is one of treating AI like a business, aiming to sell the AI stack worldwide and generate a lock-in for American technology. “Winning,” in this context, is primarily economic. The plan also includes the sorely needed idea of modernizing the electrical grid, a growing concern due to rising electricity demands from data centers. While any extra business is welcome in a heavily indebted nation, the section on the political objectivity of AI is too brief and misunderstands both the root cause of political bias in AI and its role in the culture war.

The plan uses the term "objective" and implies that a lack of objectivity is entirely the fault of the developer, for example:

Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.

The fear that AIs might tip the scales of the culture war away from traditional values and toward leftism is real. Try asking ChatGPT, Claude, or even DeepSeek about climate change, where COVID came from, or USAID.

Training data is heavily skewed toward being generated during the 'woke tyranny' era of the internet.

This desire for AI objectivity may come from a good place, but it fundamentally misconstrues how AIs are built. AI in general, and LLMs in particular, are a combination of data and algorithms, which further break down into network architecture and training methods. Network architecture is frequently based on stacking transformer or attention layers, though it can be modified with concepts like “mixture of experts.” Training methods are varied and include pre-training, data cleaning, weight initialization, tokenization, and techniques for altering the learning rate. They also include post-training methods, in which the base model is modified to conform to a metric other than the accuracy of predicting the next token.
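To make that two-stage distinction concrete, here is a minimal, purely illustrative sketch (assuming Python with PyTorch installed; every size, name, and the placeholder “reward” is an invented stand-in, not any lab's actual code). It shows a small stack of attention layers trained to predict the next token, followed by a post-training step that nudges the same weights toward a different metric.

```python
# Illustrative sketch only: a toy "LLM" built by stacking attention (transformer) layers,
# pre-trained on next-token prediction, then post-trained on a different, made-up metric.
import torch
import torch.nn as nn

VOCAB, DIM, LAYERS = 1000, 64, 2  # toy sizes; real models are vastly larger

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=LAYERS)  # stacked attention layers
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.blocks(self.embed(tokens), mask=causal))

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tokens = torch.randint(0, VOCAB, (8, 16))  # stand-in for tokenized web text

# 1) Pre-training: maximize accuracy of predicting the next token.
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()

# 2) Post-training: optimize the same weights for a different signal (real systems use
#    human-preference scores; this "reward" is a meaningless placeholder).
reward = model(tokens[:, :-1]).softmax(-1).max(-1).values.mean()
(-reward).backward(); opt.step(); opt.zero_grad()
```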

Many have complained that post-training methods like Reinforcement Learning from Human Feedback introduce political bias into models at the cost of accuracy, causing them to avoid controversial topics or spout opinions approved by the companies — opinions usually farther to the left than those of the average user. “Jailbreaking” models to avoid such restrictions was once a common pastime, but it is becoming harder, as corporate safety measures, sometimes as complex as entirely new models, scan both the input to and output from the underlying base model.
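The wrapper pattern described here can be sketched in a few lines. Everything below (the function names, the keyword rule) is a hypothetical stand-in for what are, in practice, separate trained classifier models; it is not a description of any vendor's actual system.

```python
# Illustrative sketch: a separate "safety" check screens the user's prompt before it
# reaches the base model, then screens the draft answer before the user sees it.
BLOCKED_TOPICS = {"example-banned-topic"}  # stand-in for a learned moderation policy

def safety_model(text: str) -> bool:
    """Toy stand-in for a dedicated moderation model; True means the text is allowed."""
    return not any(topic in text.lower() for topic in BLOCKED_TOPICS)

def base_model(prompt: str) -> str:
    """Toy stand-in for the underlying LLM."""
    return f"Draft answer to: {prompt}"

def guarded_chat(prompt: str) -> str:
    if not safety_model(prompt):      # scan the input
        return "I can't help with that."
    draft = base_model(prompt)
    if not safety_model(draft):       # scan the output
        return "I can't help with that."
    return draft

print(guarded_chat("Tell me about the weather"))
```

In these terms, "jailbreaking" is the attempt to craft a prompt that slips past the input check while steering the base model toward output the second check fails to catch.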

As a result of this battle between RLHF and jailbreakers, an idea has emerged that these post-training methods and safety features are how liberal bias gets into the models. The belief is that if we simply removed these, the models would display their true objective nature. Unfortunately for both the Trump administration and the future of America, this is only partially correct. Developers can indeed make a model less objective and more biased in a leftward direction under the guise of safety. However, it is very hard to make models that are more objective.

The problem is data

According to the report “Google AI Mode vs. Traditional Search & Other LLMs,” the top domains cited by LLMs are Reddit (40%), YouTube (26%), Wikipedia (23%), Google (23%), Yelp (21%), Facebook (20%), and Amazon (19%).

This seems to imply that a lot of the real-world factual data in AIs comes from Reddit. Spending trillions of dollars to create an “eternal Redditor” isn’t going to cure cancer. At best, it might create a “cure cancer cheerleader” who hypes up every advance and forgets about it two weeks later. One can only do so much in the algorithm layer to counteract the frame of mind of the average Redditor. In this sense, the political slant of LLMs is less due to the biases of developers and corporations (although they do exist) and more due to the biases of the training data, which is heavily skewed toward being generated during the "woke tyranny" era of the internet.

In this way, the AI bias problem is not about removing bias to reveal a magic objective base layer. Rather, it is about creating a human-generated and curated set of true facts that can then be used by LLMs. Using legislation to remove the methods by which left-leaning developers push AIs into their political corner is a great idea, but it is far from sufficient. Getting humans to generate truthful data is extremely important.

The pipeline to create truthful data likely needs at least four steps.

1. Raw data generation of detailed tables and statistics (usually done by agencies or large enterprises).

2. Mathematically informed analysis of this data (usually done by scientists).

3. Distillation of scientific studies for educated non-experts (in theory done by journalists, but in practice rarely done at all).

4. Social distribution via either permanent (wiki) or temporary (X) channels.

This problem of truthful data plus commentary for AI training is a government, philanthropic, and business problem.


I can imagine an idealized scenario in which all these problems are solved by harmonious action in all three directions. The government can help the first portion by forcing agencies to be more transparent with their data, putting it into both human-readable and computer-friendly formats. That means more CSVs, plain text, and hyperlinks and fewer citations, PDFs, and fancy graphics with hard-to-find data. FBI crime statistics, immigration statistics, breakdowns of government spending, the outputs of government-conducted research, minute-by-minute election data, and GDP statistics are fundamentally pillars of truth and are almost always politically helpful to the broader right.

In an ideal world, the distillation of raw data into causal models would be done by a team of highly paid scientists via a nonprofit or a government contract. This work is too complex to be left to the crowd, and its benefits are too distributed to be easily captured by the market.

The journalistic portion of combining papers into an elite consensus could be done similarly to today: with high-quality, subscription-based magazines. While such businesses can be profitable, for this content to integrate with AI, the AI companies themselves need to properly license the data and share revenue.

The last step seems to be mostly working today, as it would be done by influencers paid via ad revenue shares or similar engagement-based metrics. Creating permanent, rather than disappearing, data (à la Wikipedia) is a time-intensive and thankless task that will likely need paid editors in the future to keep the quality bar high.

Freedom doesn't always boost truth

However, we do not live in an ideal world. The epistemic landscape has vastly improved since Elon Musk's purchase of Twitter. At the very least, truth-seeking accounts don’t have to deal with as much arbitrary censorship. Even other media have made token statements claiming they will censor less, even as some AI “safety” features are ramped up to a much higher setting than social media censorship ever was.

The challenge with X and other media is that tech companies generally favor technocratic solutions over direct payment for pro-social content. There seems to be a widespread belief in a marketplace of ideas: the idea that without censorship (or with only some person’s favorite censorship), truthful ideas will win over false ones. This likely contains an element of truth, but the peculiarities of each algorithm may favor only certain types of truthful content.

“X is the new media” is a commonly spoken refrain. Yet both anonymous and public accounts on X are implicitly burdened with tasks as varied and complex as gathering election data, writing long think pieces, and consistently repeating slogans that reinforce a key message. All for a chance at a few Elon bucks. They are doing this while competing with stolen-valor thirst traps from overseas accounts. Obviously, most are not that motivated and stick to pithy and simple content rather than intellectually grounded think pieces. The broader “right” is still needlessly ceding intellectual and data-creation ground to the left, despite occasional victories in defunding anti-civilizational NGOs and taking control of key platforms.

The other issue experienced by data creators across the political spectrum is the reliance on unpaid volunteers. As the economic belt inevitably tightens and productive people have less spare time, the supply of quality free data will worsen. It will also worsen as both platforms and users feel rightful indignation at their data being “stolen” by AI companies making huge profits, thus moving content into gatekept platforms like Discord. While X is unlikely to go back to the “left,” its quality can certainly fall farther.

Even Redditors and Wikipedia contributors provide fairly complex, if generally biased, data that powers the entire AI ecosystem. Also for free. A community of unpaid volunteers working to spread useful information sounds lovely in principle. However, in addition to the decay in quality, these kinds of “business models” are generally very easy to disrupt with minor infusions of outside money; all it takes is paying one full-time person to post. If you are not paying to generate politically powerful content, someone else is always happy to.

The other dream of tech companies is to use AI to “re-create” the entirety of the pipeline. We have heard so much drivel about “solving cancer” and “solving science.” While speeding up human progress by automating simple tasks is certainly going to work and is already working, the dream of full replacement will remain a dream, largely because of “model collapse,” the situation where AIs degrade in quality when they are trained on data generated by themselves. Companies occasionally hype up “no data/zero-knowledge/synthetic data” training, but a big example from 10 years ago, “RL from random play,” which worked for chess and Go, went nowhere in games as complex as StarCraft.

So where does truth come from?

This brings us to the recent example of Grokipedia. Perusing it gives one a sense that we have taken a step in the right direction, with an improved ability to summarize key historical events and medical controversies. However, a number of articles are lifted directly from Wikipedia, which risks drawing the wrong lesson. Grokipedia can’t “replace” Wikipedia in the long term because Grok’s own summarization is dependent on it.

Like many of Elon Musk’s ventures, Grokipedia is two steps forward, one step back. The forward steps are a customer-facing Wikipedia that seems to be of higher quality and a good example of AI-generated long-form content that is not mere slop, achieved by automating the tedious, formulaic steps of summarization. The backward step is a lack of understanding of what the ecosystem looks like without Wikipedia. Because so many of Grokipedia’s articles are lifted directly from Wikipedia, it will be very hard to keep neutral articles properly updated if Wikipedia disappears.

Even the current version suffers from a “chicken and egg” source-of-truth problem. If no AI has the real facts about the COVID vaccine and categorically rejects data about its safety or lack thereof, then Grokipedia will not be accurate on this topic unless a fairly highly paid editor researches and writes the true story. As mentioned, model collapse is likely to result from feeding too much of Grokipedia to Grok itself (and other AIs), leading to degradation of quality and truthfulness. Relying on unpaid volunteers to suggest edits creates a very easy vector for paid NGOs to influence the encyclopedia.

The simple conclusion is that to be good training data for future AIs, the next source of truth must be written by people. If we want to scale this process and employ a number of trustworthy researchers, Grokipedia by itself is very unlikely to make money; it will probably forever be a money-losing business. It would likely be both a better business and a better source of truth if, instead of being written by AI to be read by people, it was written by people to be read by AI.

Eventually, the domain of truth needs to be carefully managed, curated, and updated by a legitimate organization that, while not technically part of the government, would be endorsed by it. Perhaps a nonprofit NGO — except good and actually helping humanity. The idea of “the Foundation” or “Antiversity” is not new, but our over-reliance on AI to do the heavy lifting is. Such an institution, or a series of them, would need to be bootstrapped by people willing to invest in our epistemic future for the very long term.

The Culture Wars Didn’t ‘Come For Wikipedia.’ Wikipedia Is Fueling Them With Lies

Congress should consider expanding its investigation of Wikipedia to cover not just the site's antisemitism but its active censorship.

Dem Nominee in Tennessee Special Election Smeared Her Own State As ‘Racist’

Tennessee is a racist state—at least according to Aftyn Behn, the Democratic nominee for the special election for Tennessee's Seventh Congressional District.


AI Idols Will Make Idiots Of Us All — If We Let Them

We're making utter fools of ourselves while claiming to have reached the apex of wisdom.