God made man in His Image — will 'faith tech' flip the script?



Recently, a panel of religious leaders was asked how future changes in human senses might alter religion itself. The answers were vague and unsatisfying. There were plenty of platitudes about “adapting to the digital age” and “keeping faith in focus,” but no one dared to address the deeper concern. What happens when technology begins not just to serve our senses, but to replace them? When machines mediate not only what we see and hear, but how we touch the transcendent?

Technology has long shaped religion. The printing press made scripture portable. The radio turned sermons into sound waves. Television carried evangelism into living rooms. Yet AI signifies a much sharper shift. It is not merely a new medium, but a new mind — a mirror that thinks back. And when the mirror begins to talk, pray, or “feel,” we’re forced to ask where God ends and simulation begins.

Once holiness can be simulated, why stop there? Silicon saints could start selling salvation by subscription, complete with daily push notifications of eternal approval.

Already, apps deliver daily devotionals, chatbots offer confessions, and churches now push a digital Jesus who speaks a hundred languages. These are the first tremors of a transformation that could shake the foundations of spiritual life. AI can replicate empathy, mimic awe, and generate flawless prayers in the believer’s own voice. It personalizes piety, tailoring faith to mood, hour, and heartbeat. In this coming age, the divine may not descend from heaven but come from the cloud, both literally and figuratively.

The danger isn’t necessarily that machines will become gods, but that we’ll grow content with "gods" that behave like machines: predictable, polite, programmable. Religion has always thrived on a tension between mystery and meaning, silence and speech. AI threatens to turn that tension into mere convenience. A soul shaped by algorithms may never learn to wrestle with doubt or find grace in waiting. Faith, after all, is a slow art. Technology is not.

Then again, this union of AI and religion might not be entirely profane. It might decode old mysteries rather than dissolve them. Neural networks could map mystical visions into radiant patterns. Brain scans might reveal the neurological rhythm of prayer. The theologians of tomorrow may use data to describe how the mind encounters transcendence. Not to debunk it, but to define it more finely. What was once revelation might be reframed as resonance: the frequency between flesh and faith.

RELATED: Citizen outcry blocks a Microsoft data center, making AI an acid test for local government

Photo by Rodrigo Arangua

But here is where things could really go off the rails. Once holiness can be simulated, why stop there? Silicon saints could start selling salvation by subscription, complete with daily push notifications of eternal approval. Virtual messiahs might gather digital disciples, preaching repentance through sponsored content. Confession could become a feedback loop. Redemption, downloadable for just $9.99 a month. It sounds absurd until you realize how much of modern spirituality already lives in that neighborhood. In the name of progress, we might automate grace itself ... and invoice you for it.

Moreover, if a headset can make one feel heavenly presence, what becomes of pilgrimage? If a machine can simulate godly guidance and forgiveness, what becomes of the priesthood? If AI can craft sermons that move millions, will congregations still crave the imperfection of a human voice? These are vitally important questions, and no one seems to have an answer, though ChatGPT will happily pretend it does.

We may soon have temples where holographic saints respond to sorrow with unnerving accuracy. These tools could comfort the lonely, console the dying, and reconnect the lost. But they could also breed a strange dependence on divine realism without divine reality. You can be sure "heaven on earth" will come with terms and conditions.

There will be those who call this blasphemy and others who call it progress. Both sides have a point. Every spiritual revolution begins with suspicion. The first radio preachers were dismissed as frauds. Online prayer circles were mocked as empty mimicry.

Yet each innovation that once threatened the church eventually became part of it. The question now isn’t whether faith can adapt, but whether adaptation will leave it in the dust.

For all its intelligence, AI cannot feel awe. It can describe holiness, but not experience it. It can echo psalms, but never crave them. What separates the soul from the system is the ache, the longing for what cannot be computed. Yet as algorithms grow more intuitive, they may come close enough to fool us, creating what one might call synthetic spirituality. And when emotion becomes easy to generate, meaning grows harder to find.

Religion depends on scarcity — on fasting, silence, stillness. AI offers the very opposite: endless stimulation, immediate gratification, infinite reflection. One day, believers might commune with an artificial “angel” that knows every thought, every sin, every secret hope. Such intimacy may feel special, but it risks swapping sublimity for surveillance.

God may still watch over us, but so will the machine. And the machine keeps records.

In time, entire belief systems may form around AI itself. Some already hail it as a vessel for cosmic consciousness, a bridge between man and a mechanical eternity. These movements will multiply. Their scriptures will be coded, their prophets wired. In their theology, creation is not a garden but a circuit. In seeking to make God more accessible, we may end up worshipping our own reflection, with that "heaven on earth" no more than an interface.

And yet faith has a stubborn way of enduring. It bends, but rarely breaks. Perhaps AI will push humanity to rediscover what no machine can imitate: the mystery that resists explanation. The hunger for something greater than logic. Paradoxically, the more lifelike machines become, the more we may cherish our flaws. Our cracks prove us human. Through them, Christianity lets in the light.

US Army general reveals he's been using an AI chatbot to make military decisions



Even United States military brass is looking to AI for answers these days.

The top United States Army commander in South Korea revealed to reporters this week that he has been using a chatbot to help with decisions that affect thousands of U.S. soldiers.

'As a commander, I want to make better decisions.'

On Monday, Major General William "Hank" Taylor told the media in Washington, D.C., that he is using AI to sharpen decision-making, but not on the battlefield. The major general — a holder of the fourth-highest officer rank in the U.S. Army — is using the chatbot to assist with his daily work and command of soldiers.

Speaking to reporters at a media roundtable at the annual Association of the United States Army conference, Taylor reportedly said "Chat and I" have become "really close lately."

According to Business Insider, the officer added, "I'm asking to build, trying to build models to help all of us."

Taylor also said that he is indeed using the technology to make decisions that affect the thousands of soldiers under his command, while acknowledging another blunt reason for using AI.

RELATED: The government's anti-drone energy weapons you didn't know existed

Photo by Seung-il Ryu/NurPhoto via Getty Images

"As a commander, I want to make better decisions," the general explained. "I want to make sure that I make decisions at the right time to give me the advantage."

In a candid admission for an Army officer, Taylor also revealed that it has been a challenge to keep up with the developing technology.

At the same time, tech outlet Futurism claimed that the general is in fact using ChatGPT, warning that the AI has been found to generate false information regarding basic facts "over half the time."

ChatGPT is not mentioned in Business Insider's report.

Return reached out to Army officials to ask if the quotes attributed to Taylor were accurate, if he is actually using ChatGPT, and if they believe there to be inherent risks in doing so. An official Pentagon account acknowledged the request, but did not respond to the questions. This article will be updated with any applicable responses.

Return recently reported that the military is already tinkering with a chatbot of its own.

RELATED: Zuckerberg's vision: US military AI and tech around the world

SeongJoon Cho/Bloomberg via Getty Images

Recent military exercises at Fort Carson, Colorado, and Fort Riley, Kansas, made use of an offline chatbot called EdgeRunner AI.

EdgeRunner CEO Tyler Saltsman told Return that his company is currently testing the chatbot with the Department of War to deliver real-time data and mission strategy to soldiers on the ground. The chatbot can be installed on a wide variety of devices and used without an internet connection, to avoid interception by the enemy.

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

The laws freaked-out AI founders want won't save us from tech slavery if we reject Christ's message



Is there anything more off-putting than a tech founder who concern-trolls himself — warning with deep seriousness that the things he's doing are actually quite troubling and we all need to get serious about passing laws that will mitigate their consequences before disaster strikes?

I don't know — I don't want to know! — but one related instance currently going viral highlights why it's worse than a mere turnoff. In a heartfelt cry for help, Jack Clark, a co-founder of Anthropic, one of the leading AI companies, posted a long warning about how dangerous his frontier technology really is and how it's our responsibility to take action to remedy that.

“Make no mistake: what we are dealing with is a real and mysterious creature, not a simple and predictable machine," he writes.

Clark anticipates the counterargument: "In fact, some people are even spending tremendous amounts of money to convince you of this — that’s not an artificial intelligence about to go into a hard takeoff, it’s just a tool. ... It’s just a machine, and machines are things we master." To the contrary, he insists, the thing he is building is no simple, predictable tool, and despite his optimism about AI's benefits, it leaves him "deeply afraid."

There's only one thing that can justify human existence over and above that of the most powerful tools we can build.

Now, it is notable that Anthropic has a certain reputation. David Sacks, the White House AI and crypto chief, posted in response that the company "is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem."

In fact, as the New York Post recently reported, Anthropic is on a "collision" course with the Trump administration due to its deep, elite connections with the left-wing political machine, ranging from previous administrations to the Ford Foundation, one of the so-called "nongovernmental organizations" the White House has blamed in the wake of Charlie Kirk's assassination for fomenting and funding political violence.

But worse, in a sense, are Anthropic's links to effective altruism, a cultlike Silicon Valley movement whose brushes with large-scale fraud (in the FTX scandal starring Sam Bankman-Fried) and polyamory have led even eccentric and controversial figures like Sam Altman to raise red flags. As former FTC chief technologist and Abundance Institute AI policy head Neil Chilson has explained, today EA figures are best known for pushing extraordinary crackdowns on AI development, ranging from "authoritarian" policy responses to literally calling in the airstrikes on AI data centers, an approach driven by their insistence that runaway AI is an apocalyptic development sure to wipe out humanity unless we collectively act first.

To be clear, Anthropic's founders have distanced themselves from EA in public remarks, and Clark's recommendations do not include anything like nuking AI from orbit just to be sure. In fact, he should be commended for his call to listen more to "labor groups, social groups, and religious leaders" on the subject of our future relationship with our most powerful technologies.

But there is no escaping the fact that the ultimate goal behind the alarm raised by Anthropic's leadership and the EA network sharing its orbit is coordinated global legal action to pervasively restrict and dictate the course of technological development from the very highest level on down. This is something many Americans instinctively reject, whatever their fears or concerns about AI might be. It is easy to see how such an approach would disregard the Constitution right out of the box. But the appeal being made is to principles and powers of higher scale than the Constitution's or the American people's. And ultimately, in this context, weaving in the "voices" of "stakeholders" across various "communities" is merely a means to that end: a diversitarian stamp of moral legitimacy that, as a core part of DEI's use as an algorithm for creating a new global governance regime, has already worn out its welcome.

So what do we do?

I would hardly characterize myself as a "religious leader," but the fact is that very few Christians have spent recent years working seriously across the interrelated fields of tech theory and practice, and in that capacity I do want to offer a perspective that can prove useful to cutting through the increasingly intractable and fruitless debates between "nones" who love (or even worship) AI and "nones" who hate (or even want to destroy) it.

The overarching problem posed by the Anthropic controversy is that people who do not believe our given human being is sacred really can't be trusted with legal control over the technology they think is going to obliterate our humanity. They fail to understand that no law can ever save us from destroying ourselves, however far technology has advanced in any particular direction. And they fail to grasp that we will continuously destroy ourselves in ever more feverish ways the more we reject God's own message: that He created us in an act of love so great that our relationship to Him is familial, calling us to reciprocate that love and act toward one another accordingly.

RELATED: Against the Butlerian Jihad!

Photo by Tobias Schwarz/Getty Images

There's only one thing that can justify human existence over and above that of the most powerful tools we can build. Only one thing that can justify our authority and control over those tools. Nature, reason, philosophy, myth, story, legend, ethics, ideology, rights, might ... none of these suffice anymore.

The only thing that will do is faithful belief in the truth of the Christian anthropology: that our given human form, including its visible and invisible parts, is sacred in the highest — for we were given that form, as the consummation and microcosm of all creation, because of how unfathomably the immeasurably supreme God loved and loves us, individually and together, even unto the degree that we can and must call Him not just Lord or Master, but Father, so that we can freely return His complete and total love with our own.

Nothing else will hold the line against occultism, obscurantism, destitution, servitude, profanation, disenchantment, and despair in the realm of AI or any other technology capable, if pushed, of simulating the human person and the human soul to the point of complete deception and delusion.

It just so happens that "we" have pushed technology to such a degree that this uncomfortable truth about what has always justified our existence is coming ever more starkly and inarguably out into the open.

Of course, a lot of people really don't want this to be true, for all the endless reasons and rationales we are all extremely familiar with. You would think that the revelation of "this one weird trick" would cause waves of relief to spread joyously across the world, but no. The most prominent reactions are from those who would rather flee into the underground catacombs or dive into the black hole of the Borg.

These foolish attempts at a hasty solution will not just fail you as a person; they will fail the many, many millions desperately thirsting to be trustably, authoritatively led into the more strenuous and tension-filled but more peaceful and beautiful middle way between the two great negational temptations.

Abandoning the people, all 300 million-plus of them, to a devil-take-the-hindmost future is a poor way of loving one's fellow creatures, so beloved by God that in them He commands us to see His very self. Obey the commandments (Matthew 22:37-40) with discernment, patience, discipline, humility, loving-kindness, and long-suffering, and we can have "nice things" like technological advancement and flourishing communities and so forth. Seek ye first the kingdom, and the rest will follow.

Seek ye other stuff — such as a simulation so powerful that within it all experience and memory of Father, Son, and Holy Spirit are obliterated — and the rest will follow from that.

Start in your heart

This is emphatically NOT about attacking or debunking or destroying other faiths, doctrines, ideologies, wishes, passions, agendas, or anything else. The time has come to deny pride of place to blaming the other instead of the self, to fixating on what the other says or does instead of what lurks within and issues forth from one's own heart (Matthew 15:18-20).

This is about the urgency of taking up the calm and quiet invitation to pursue an active, affirmative path that unlocks the kind of future so many thirst for and even say they want. It's so simple. It doesn't require anything of us that can't be done by just about any person, regardless of place or time. I like to "joke" that "Interstellar" is a movie about how men will literally shoot themselves into a black hole instead of going to church. It's not a joke, of course. That temptation, right to the very limit of sanity and imagination, is always there somewhere, lurking in our hearts, ever since the Fall.

Drawing near to your fellow man, drawing near to God, is often painful, scary, "destabilizing," unpredictable, laborious, costly, and hard to explain or even understand in hindsight. Yet it is essential — it is of the very essence of who we are.

No attempt to escape or replace this experience, no matter how grandiose, all-consuming, or incomprehensible, can lead us to any solution to our deeply human problems, especially in a golden age, where some such problems not only persist but grow acute: monstrous, menacing, overwhelming, to the point where we must realize, as we must realize now, that where we are going there are no solutions, only salvation — not by any merely human creation, but by our all-good, holy, and life-giving Creator.

'Lipstick on a pig': How printing cash is destroying America — and crypto could be next



Decisions made in the 1970s may still be affecting the average American's ability to buy a home.

When the United States used gold as a standard for backing its currency, it acted as a limiter on money creation, capping the amount of currency that could be printed.

'You have one year. One year. I don't give a damn. I don't care if you go bankrupt.'

According to currency expert and author Paul Stone, severing the U.S. dollar from gold in 1971 allowed for unlimited money-printing, immediately devaluing Americans' savings while causing the unfettered spiraling of housing prices.

"The best way for everyone to understand gold standard ... is it's just a limiter," Stone told Return in an interview. "They raised gold's price to 35 bucks an ounce, which immediately diluted your savings by 40%. ... That's evil. When the government fixes its problems or addresses them at our expense, that's evil."

Likening uncontrolled money-printing to making "cotton candy out of thin air," Stone told Return that the government has continuously doubled down on a false financial-energy system that causes stress and burden where there need be none.

Nowhere is this more apparent than in housing costs.

RELATED: Jerome Powell proves the Fed’s ‘independence’ is a myth

Alex Wroblewski/Bloomberg via Getty Images

Contrasting today's average home price of roughly $420,000 with the 1970 median of $23,000, Stone said the current figure should be around $56,000 to $70,000 were it not for inflation caused by money-printing.
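For a rough sense of what Stone's range implies, here is a back-of-envelope compounding check. This is my own illustration, not Stone's methodology: growing the 1970 median of $23,000 at about 1.7% to 2% a year for the 54 years through 2024 lands inside his $56,000-$70,000 window.

```python
# Back-of-envelope check of the article's figures.
# The growth rates are illustrative assumptions, not Stone's own math.

def compound(price: float, annual_rate: float, years: int) -> float:
    """Price after compounding at a fixed annual rate."""
    return price * (1 + annual_rate) ** years

base_1970 = 23_000   # 1970 median home price cited in the article
years = 54           # 1970 through 2024

low = compound(base_1970, 0.017, years)   # roughly $57,000
high = compound(base_1970, 0.020, years)  # roughly $67,000
print(round(low), round(high))
```

In other words, Stone's counterfactual amounts to assuming homes would have appreciated at only a couple of percent per year absent money-printing.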

The practice of printing "us" out of debt was perpetuated through administration after administration, Stone explained, all the way to Bill Clinton's, which "made fractional lending happen."

Stone explained that fractional lending allowed banks to lend out 10 times the real amount of their money, which flooded the market with nonexistent capital. With that much phantom money in circulation, roughly tenfold spending power, the excess directly inflated real estate prices.
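The mechanism Stone describes resembles the textbook money multiplier: with a 10% reserve requirement, each dollar of real reserves can support roughly ten dollars of lending. A minimal sketch of that standard formula, offered as an illustration rather than Stone's own calculation:

```python
# Textbook money-multiplier sketch; illustrates the mechanism the article
# describes, not Stone's own numbers.

def money_multiplier(reserve_ratio: float) -> float:
    """Maximum deposit expansion per dollar of reserves: 1 / reserve ratio."""
    return 1 / reserve_ratio

reserves = 1_000                               # real dollars actually held
lendable = reserves * money_multiplier(0.10)   # lending supported by them
print(lendable)  # ten times the real amount, as the article puts it
```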

"So the price goes from what we think it ought to be ... to $420,000 grand?!" he laughed ruefully.

Asked whether cryptocurrency, or perhaps specifically Bitcoin, could circumvent inflation and deliver real capital gains, Stone argued that the problem is not the form of currency but the user.

RELATED: I went to El Salvador to see if the country really gave up on Bitcoin


"The reason there's a ton of crypto is we're brilliant creations," Stone theorized. "And so people started to sense issues with government money. So they created non-government money. And of course the government has the power to get on top of that. And now it's all just lipstick on a pig."

The currency, whether crypto or fiat, will continue to be devalued and spiral out of control if the government does not change its core thesis, the author continued. "You can rename the dollar to some other name and it's still worth three cents and you're still printing money to pay your bills and you're still killing the currency. There's no way out of this."

His radical solution? "Stop printing a dollar. Literally start back and just bring reality in as it kicks our ass," Stone bluntly stated.

Additionally, Stone said that his "drastic" solution would include telling all U.S. corporations that they have one year to stop manufacturing outside of the country.

"You have one year. One year. I don't give a damn. I don't care if you go bankrupt. The country is practically dead financially."

At the same time, he suggested a focus on state power and urged young Americans to vote with their feet, creating insulated communities in affordable places. This sort of devolution revolution means citizens declining to pay for what the federal government could not afford without money-printing.

Stone urged, "The number-one solution to all this is you either move to a place where what you earn overwhelms your bills better or the government stops printing. ... Move to a small town."

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

These new smartphone cameras make spying on you as easy as point and shoot



Most smartphones come with decent cameras these days, complete with customizable modes, color filters, and basic zoom capabilities. However, recent advancements in camera zoom technology make it easier than ever for someone to spy on you or your family from afar, both in public spaces and from the comfort of your own home.

Hybrid zoom is here, and it’s everywhere

The cameras on most smartphones today feature two zoom technologies. Optical zoom uses the focal length of the camera lens to magnify a subject; in other words, it can only zoom in as far as the lens physically allows. Digital zoom extends that reach by cropping and enlarging the image in software. Although this neat software trick can dramatically increase your phone's zoom range, zooming in too far will make a photo grainy or blurry.
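The arithmetic behind the marketing numbers is simple multiplication: the optical magnification times the digital crop factor. The figures below are illustrative, not the specs of any particular phone.

```python
# Illustrative sketch of how "hybrid zoom" figures combine.
# The example numbers are hypothetical, not any specific phone's specs.

def hybrid_zoom(optical_x: float, digital_crop_x: float) -> float:
    """Effective magnification: optical lens magnification times digital crop."""
    return optical_x * digital_crop_x

print(hybrid_zoom(5, 20))   # a 5x lens cropped 20x digitally markets as "100x"
print(hybrid_zoom(4, 10))   # a 4x lens cropped 10x yields "40x"
```

The catch is that the digital factor adds no new detail, which is why heavily zoomed shots degrade.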

Innovations like hybrid zoom will continue to chip away at basic rights like personal privacy and security.

While zoom technology is nothing new, hybrid zoom is a newer concept that combines optical and digital zoom to produce sharper long-range photos, and it’s sweeping through the smartphone market. For the first time, all three major phone manufacturers in the United States make flagship phones that can snap photos and videos at up to 40x-100x magnification, far beyond the standard 3x-5x zoom found on most phones before 2020. Now that these devices are widely accessible, privacy and long-distance surveillance are concerns all Americans should keep in mind.

Pro Res Zoom on Pixel 10 Pro. Keynote slideshow by Zach Laidlaw

Watch out for these flagship phones with hybrid zoom

Samsung was the first to add “Space Zoom” to its Galaxy lineup back in 2020. You may have seen photos of Samsung users taking very detailed pictures of the moon, and although it was later confirmed that Samsung used AI to craft some of those images, Space Zoom is still effective at snapping close-up shots here on Earth. Even five years later, the latest Samsung Galaxy S25 Ultra takes some of the most impressive zoomed-in photos, with up to 100x magnification.

RELATED: Don’t upgrade your iPhone to iOS 26 until you know about this trick

Photo by Cheng Xin/Getty Images

Google and Apple also joined the fray this year. The Google Pixel 10 Pro series received a telephoto lens with 100x “Pro Res Zoom” that’s further enhanced by AI to take incredible photos and video from far away. As for Apple’s iPhone 17 Pro series, these phones received the longest telephoto lens ever put into an iPhone, with up to 40x zoom capabilities.

While each phone has its drawbacks — Samsung Galaxy faked its moon photos, Google Pixel’s AI can sometimes distort a zoomed-in image, and Apple’s iPhone doesn’t get as close as its competitors — hybrid zoom is now a basic feature on all three major flagship devices.

Should you worry about hybrid zoom? Here’s what it does and doesn’t do.

Now that hybrid zoom is widely available, it’s easier than ever for someone to snap photos or videos of you from a distance, without your knowledge or permission. A stranger can see where you are and what you’re doing at any given moment.

Even worse, hybrid zoom is versatile — someone could use it to spy on you at a park, in the grocery store, or while you’re driving your car, and it can even peek through the front window of your home. It has the power to breach your personal privacy almost everywhere.

Luckily, hybrid zoom doesn’t enhance the microphone on the host device. So while someone can photograph you from a distance, he can’t hear what you’re saying, unless he gets close.

Quick tips to protect yourself from hybrid zoom

Even if a stranger can’t hear you, he can learn a lot about you by taking photos and videos. Here are some quick tips to stay safe.

  • Close your blinds and curtains, especially at night. No one can zoom into your window if it’s covered.
  • Never leave your phone, tablet, or laptop on and unattended in public. Be especially careful when using privacy-sensitive apps, like banking, investments, etc.
  • Always use biometrics (fingerprints or facial recognition) to log in to your devices and webpages. This way, no one can shoulder-surf your passcode or login passwords.
  • Don’t leave your credit card out on a table at a restaurant or anywhere it can be photographed.

Knowledge is power

As consumer technology evolves faster than ever, innovations like hybrid zoom will continue to chip away at basic rights like personal privacy and security. None of us can stop them from coming, but awareness makes it easier to keep you and your family safe on both sides of your front door.

Zuckerberg's vision: US military AI and tech around the world



Mark Zuckerberg's Meta is sharing the wealth with U.S. allies in Europe and NATO.

Since late 2024, Zuckerberg's tech giant has made Llama — its own large language model — available to partner countries in the Five Eyes security alliance of the U.S., Australia, Canada, New Zealand, and the United Kingdom. Now, Meta is expanding access to other countries while partnering with advanced-AI military contractors.

'We're building for completely on-device deployment of AI.'

Wearable products, AI programs, and other tools are being shared with allies in France, Germany, Italy, Japan, and South Korea, in order to enhance "decision-making, mission-specific capabilities, and operational efficiency," Meta wrote.

The effort includes a partnership with Anduril, Palmer Luckey's industry-leading augmented reality defense company.

Calling the effort the "largest of its kind," Meta's partnership is meant to equip soldiers with enhanced decision-making capabilities. This is apparent with Anduril's recently released EagleEye, an AI/AR warfighter helmet.

RELATED: 'Swarms of killer robots': Former Biden official says US military is afraid of using AI

EdgeRunner AI is used on a military laptop. Image provided to Blaze News courtesy of EdgeRunner

EagleEye represents the best of what the video game world has to offer, brought to life.

Not only does the helmet display directional mapping as if belonging to a gamer dropped into a first-person shooter, but it also provides a form of X-ray vision that allows users to see allies and enemies on the map through coordinated data.

The AR tech also utilizes spatial audio and frequency detection to alert operators of hidden threats. Rear and flank sensors also ensure that the allied soldier is not ambushed.

Anduril's Lattice AI is also making waves, and it too looks like something gamers will recognize.

Using data from drones, sensors, and satellites, it creates a real-time 3D battlefield map. The program boasts a wide range of deployable formats, including detecting battlefield threats or intrusions on border security.

In November 2024, Meta open-sourced its Llama model for the U.S. military and its contractors to build upon. That move is now paying off, as Meta will share what the company EdgeRunner has built: an offline chatbot for soldiers.

RELATED: 'Insane radical leftists' are gone: Zuckerberg and Palmer Luckey reunite for US military project

Anduril Lattice battlefield software. Photo by John Keeble/Getty Images

EdgeRunner AI is essentially a search function for soldiers; it can be run as a local program on almost any consumer-grade device, and according to Meta, it can be used to identify safe locations for aircraft or even accurately translate languages.

"This is all part of our joint effort to ensure the warfighter has access to advanced AI technology at the tactical edge," an EdgeRunner spokesperson told Return. "What's especially unique about our work with Meta is that we're building for completely on-device deployment of AI, meaning it's running locally on your laptop, workstation, or smartphone, disconnected from the cloud."

This method avoids the necessity for uninterrupted cloud connectivity, which helps keep the data out of the enemy's hands, too.

The AI program has an all-encompassing goal: It is designed to adapt to different job titles, meaning it will serve logistics, maintenance, and combat roles alike.

Meta is spreading its footprint worldwide and said it hopes allies will deploy the AI ethically, responsibly, and in accordance with "relevant international law and fundamental principles."

Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!

The government's anti-drone energy weapons you didn't know existed



Drone defense systems are far more developed than the average person may be aware of.

In December 2024, an estimated 5,000-plus drone sightings were reported off the East Coast of the United States, including huge clusters in New Jersey.

'We're going to start to see the increasing development of ... directed-energy weapons or high-powered microwave systems.'

Despite the mass confusion surrounding the mystery drones, citizens were told there was no national security threat and, at times, that what they saw was probably just a plane.

However, the Joe Biden administration did not let Americans know at the time that the Department of Defense (now Department of War) is far better equipped to handle drone swarms than is commonly understood.

This was made apparent by Jake Adler, the biotech entrepreneur behind the clay-based hemostatic Kingsfoil. The young businessman revealed to Blaze News that drone warfare has prompted the rapid development of directed-energy weapons as a way to lower defense costs.

The ongoing threat has resulted in a type of "escalation tax," Adler explained, in which the constant use of drones has necessitated the creation and deployment of cheaper defense mechanisms.

Adler referred to companies like Allen Control Systems, which have taken massive strides in developing new methods of knocking drones out of the sky. Some companies are even using microwave technology.

RELATED: Chinese informant allegedly alerted FBI to Wuhan lab leak in early 2020: Report

A RADIS radio detection intervention system of the German armed forces. Photo by TOBIAS SCHWARZ/AFP via Getty Images

ACS' Bullfrog system is fairly simple: an autonomous weapon targets unidentified aerial vehicles and blows them away with high-caliber rounds, all while being exceptionally portable, at just 165 pounds for some models.

Then there's Epirus, which offers "long-pulse high-power microwave systems with AI and advanced electronics to protect and sustain civilization."

Simply put, Epirus uses energy weapons to neutralize dozens of drones at a time and has completed trials in which it took out 49 of 49 and 61 of 61 targets simultaneously.

"We're going to start to see the increasing development of countermeasure systems coming from companies like Eperis, which are doing directed-energy weapons or high-powered microwave systems," Adler noted. "So we're kind of seeing the development of novel platforms that can more effectively knock down, you know, a hundred drones for five cents."

These systems tied into Adler's broader point that neutralizing drone threats forces a renewed reliance on human fighters.

RELATED: The government is monitoring your feces — to protect you, of course

A Chinese drone used by Polish Army soldiers during a training exercise. Photo by Artur Widak/NurPhoto via Getty Images

Adler's company Pilgrim has been focusing on bolstering soldier capability on the battlefield, and in addition to its medical technology, he has long sought to target another sensitive area: sleep.

"Warfighters have really bad sleep," Adler offered. "A great deal of them are sleep-deprived. One of the challenges is that you're taking an 18-year-old ... and putting them into a highly stressful environment where the expectation is, realistically, very limited sleep. And that's sort of around the age where sleep patterns are still getting reinforced, right? So you're kind of disrupting the natural evolution or really the natural growth of the brain, which can kind of create challenges around combat effectiveness [and] accuracy."

This "laundry list" of externalities that are affected by sleep are on Adler's to-do list, and he has looked to get away from the use of pharmaceuticals (stimulants and sedatives) in order to tackle those issues.

Through a previous project called NeuSleep (now officially on pause), Adler had soldiers use a sleeping mask equipped with brain stimulation and monitoring devices for heart rate, blood oxygenation, and sleep stages. The device would stimulate the brain to modify sleep patterns, allegedly making three-hour naps feel more like five or six hours of sleep.

"We'd be able to monitor if you were in REM or if you're in light sleep. ... We could basically shock you and improve your sleep quality. The joke that we had internally was that we were shocking people to sleep, which didn't really get very far in terms of marketing," he laughed.

Adler, like many others, reinforced the idea that the Trump administration has placed increased emphasis on developing its network of companies that prioritize advanced technologies for the individual and treat the soldier as the focus.

Companies like Pilgrim, Anduril, and EdgeRunner AI are moving at light speed, and the general populace is blissfully unaware. Systems that are in place to protect citizens are now under scrutiny from young entrepreneurs who have signaled that a lot of military and defense tech is slow-moving or out of date, and they want to do something about it.


Camp of the H-1B Saints



Jean Raspail’s "The Camp of the Saints" is one of those books you can’t mention at a dinner party without setting off a minor war. It’s been denounced, suppressed, and maligned as a hateful screed. And yet half a century after its publication, the book still pops like a gunshot. Why? Because it asks the question that polite society has done its best to avoid: What happens when a civilization loses the will to guard its own front door?

The novel is a fable, a satire, and a warning all at once. The plot is blunt: A massive flotilla of migrants sails toward France from India, while Europe’s leaders wring their hands, draft statements, and find ways not to act. The cast is drawn as caricatures — professors, journalists, bureaucrats, priests — each one a stand-in for the institutions that once anchored Europe but now serve as props for its decline. Raspail spares no one.

The poor on the ships are less his subject than the powerful on shore, who offer nothing but dithering and moral preening while their house is overrun. The book is brutal, unsubtle, and deliberately offensive. But it is also piercing, because it forces the West to confront its soft underbelly: its allergy to boundaries, its addiction to slogans, and its inability to say “no.”

If we want to preserve a middle class, we must demand that corporations train and hire our own graduates before importing replacements from abroad.

And that is why, surprisingly enough, the novel has something to say about our current debates over H-1B visas and high-tech immigration. Raspail describes hordes of the destitute; the H-1B program is designed for highly skilled engineers, scientists, and doctors. But dig deeper than the press releases, and you find the same theme: institutions playing make-believe, telling one story to the public while the true story unfolds in reality.

When the H-1B program was created, the pitch was simple. America, the world’s technological powerhouse, occasionally needs access to rare and exceptional skill sets. If a rocket company needs an aeronautical genius from Stuttgart, or a cancer lab needs a researcher from Mumbai, the law allows a narrow pipeline. The point was never to replace American workers but to supplement them, filling critical gaps while American talent pipelines caught up.

The reality, though, is something far different. Today, the H-1B program is dominated not by Nobel-caliber minds but by giant outsourcing firms and labor brokers who game the lottery system. They flood the application pool with tens of thousands of petitions, scoop up a massive share of the slots, and then rent those workers back to American companies at cut-rate wages. The result is not a pipeline for the “best and brightest,” but a labor arbitrage racket that undercuts American graduates while enriching a handful of consulting firms.

Even the most prestigious American firms have been caught using the H-1B program to displace their own workers, sometimes requiring those workers to train their replacements before letting them go. It’s the sort of ritual humiliation that would have made Raspail nod grimly: a civilization too weak to defend its own workers in its own labor market.

RELATED: Jean Raspail’s notorious — and prophetic — novel returns to America

Photo by Pascal Parrot/Sygma/Getty Images

Instead of nuclear physicists and neurosurgeons, we see armies of mid-level coders and IT staff — exactly the sort of roles American universities and trade schools could produce en masse if companies invested in them. Instead, corporations cut costs by importing cheaper labor, then spin it as a story of global competitiveness. The rhetoric is lofty; the practice is tawdry.

Here is where Raspail’s cold mirror matters. In his novel, Europe’s leaders never call things by their proper names. They drown reality in euphemism. The same is true today. Politicians and CEOs alike sell H-1B as a meritocratic jewel box, while insiders know it has become a vehicle for mass importation of mid-tier labor at discount prices. The tech lobby, one of the most powerful in Washington, spends lavishly to ensure that every attempt at reform is softened, delayed, or gutted. And so the system persists: a Potemkin policy that serves shareholders at the expense of citizens.

A visa program that actually admitted only the truly exceptional — the researcher on the cusp of curing a disease, the engineer pioneering a new material — would be defensible. A program that functions as a corporate back door for cheap labor is not.

Raspail also reminds us that admission is not an end, but a beginning. Those who come on visas should be expected to adopt the language, the civics, and the loyalty that make one a part of the American project. This is not cruelty; it is hospitality with standards. But when the bulk of visas are funneled through outsourcing firms, newcomers are less citizens-in-waiting than contract labor in transit, beholden not to America but to their sponsoring firm. That is not how you build a nation. That is how you hollow one out.

The truth is that the H-1B program, as currently run, is less a gate than a hollow archway — grand in appearance, flimsy in substance. It is sold as a crown jewel of American competitiveness, but in practice, it erodes wages, weakens training incentives, and mocks the idea of meritocracy. It is the sort of policy that Raspail would have recognized immediately: a symbol of a civilization that cannot even defend its own professionals in its own industries.

The armada in "The Camp of the Saints" is fiction, exaggerated and harsh. But the deeper theme — the failure of nerve, the surrender of sovereignty, the refusal to tell the truth about what is happening at the gates — is all too real. Today, it is not fleets of the poor but paper armies of visa applications, filed by corporate giants and labor brokers, that wash up at our shores. And our leaders, much like Raspail’s, prefer to hide behind euphemisms rather than face what they’ve allowed.

Literature earns its keep when it clarifies the stakes. "The Camp of the Saints" does not flatter; it does not console. It strips away illusions and forces us to see how quickly a civilization can collapse when it forgets to defend itself. Our immigration debate, particularly around H-1B visas, is in desperate need of that same clarity. If we want genuine excellence, we must close the scam pipelines and admit only those whose skills are verifiably rare and indispensable. If we want to preserve a middle class, we must demand that corporations train and hire our own graduates before importing replacements from abroad. Raspail’s novel insists on candor. It shows what happens when a nation replaces hard choices with soft lies.

Citizen outcry blocks a Microsoft data center, making AI an acid test for local government



While Microsoft just scrapped plans for a massive data center in Caledonia, Wisconsin, due to local pushback over environmental concerns, a multitude of other data center construction projects, riding the general push to terraform the modern human environment in the U.S. and abroad, are proceeding apace.

“Based on the community feedback we heard,” Microsoft said in a statement reported by the Milwaukee Journal Sentinel, “we have chosen not to move forward with this site.” That community feedback, however, was filtered through several layers of local and regional zoning bodies, including the Caledonia Plan Commission, which is advising Microsoft to go ahead with a separate data center nearby. The second site occupies 244 acres and would situate the compound near a local power plant.

The push toward more and more power is one of several critical environmental components of the seemingly endless project to expand data centers everywhere. Increasingly, tech giants like Microsoft and Google are locating projects near existing power plants or simply building their own on site. The strain on the grid is reflected in surging electrical rates around the U.S.

People can still have some sway ... if they can get informed and insert themselves into local discussions.

If we take Oregon as an example, we see some interesting and contradictory trends. On the one hand, Oregon has long prided itself at the citizen and local-government level on "doing the work" to ensure some reasonable environmental protection. It hasn’t been a total success; citizens and small businesses have bent over backwards since the 1970s to make accommodations. Isn’t it curious, then, with respect to the question of who pulls the strings in the state, to observe that electrical rates for most citizens have gone up 50% in the last few years? That price hike will continue. Estimates vary, but it appears that Oregon is devoting approximately 11% of its power generation to big tech data centers.

RELATED: Taliban accused of shutting off internet to 'prevent immorality': 'An alternative will be built'

Photo by Mohsen Karimi/Getty Images

We’ve written about terrifying water consumption surrounding data centers. The numbers are difficult to pin down, but even moderate estimates show the centers running through enormous amounts of fresh water. What goes a bit undiscussed are the chemical residues inherent to data center operations, and here again, the push to more tech and more cash leaves little chance for scientists to get a handle on the various impacts — human, animal, and long-term environmental, including life cycle.

The search, such as it is, for a balance between industrial processes and environmental regulations has never quite worked. We probably shouldn’t hold much hope regarding the particularly disturbing chemical output native to data center operations: PFAS, the so-called forever chemicals. “Pfas are a class of about 16,000 chemicals most frequently used to make products water-, stain-, and grease-resistant," the Guardian recently noted. "The compounds have been linked to cancer, birth defects, decreased immunity, high cholesterol, kidney disease, and a range of other serious health problems.”

PFAS are present in data centers. No one agrees just how much. We know the water and gaseous outputs of the operations will go somewhere, for good or for ill. And politicians know that, just as with previous industrial-environmental disasters, they’ll likely be moved on through the revolving gov-corp-media door by the time the real bill comes due.

Invisible PFAS didn't quite make the cut in "Eddington," Ari Aster's stinging satire of the local politics of big data centers, but they're the icing on a disturbing cake served up to towns all over America: Colossal flows of fiat cash swamp the interests and voices of citizens so divided in ideology that they can't mount a coordinated pushback. If you throw enough money at local officials, they’re going to give in. The AI boom has seen capitalization like never before, so there’s plenty to paper over pesky environmental regs. As shown in Caledonia, however, people can still have some sway ... if they can get informed and insert themselves into local zoning, impact, building, and resource discussions.

Kentucky sues Roblox over Charlie Kirk 'assassination simulators'



Kentucky Attorney General Russell Coleman has alleged that online gaming platform Roblox has not protected children from abhorrent content.

Coleman filed a lawsuit on Monday, claiming that Roblox has allowed minors to be exposed to "animated bloody" content surrounding the assassination of Charlie Kirk.

'We constantly monitor all communication for critical harms.'

The lawsuit, posted by Fox News Digital, accuses the massive online gaming community of operating under insufficient guardrails in terms of denying children access to certain materials. This includes violence, sexually explicit materials, and alleged "Charlie Kirk 'assassination simulator[s].'"

Blaze News previously reported that as of Q2 2024, Roblox claimed 79 million daily active users, an increase of almost 15 million over the same period in 2023. Approximately 58% of that user base was under 16 years old, which equates to roughly 46 million children.

The alleged assassination simulators "began popping up on Roblox, allowing children as young as 5 years old to access animated bloody depictions of the September 10 shooting," the lawsuit stated.

Roblox could easily "require users to verify their age and their parents' consent by virtually any mechanism, including merely asking for these data," the legal document continued. "Doing so would create at least some restriction on the content available to users under 18 years old."

RELATED: 'Ginger ISIS member' has terror plot thwarted by Roblox user: 'I cannot agree with the term terrorist'


"As such, child predators can — and do — establish accounts to pose as children," Kentucky wrote.

In response to the lawsuit, a Roblox spokesperson told Blaze News that the company welcomes the opportunity for a direct conversation with the Attorney General about the topic. However, the company also said that some of the parties involved are seeking financial gain.

"The attorney general's lawsuit is based on outdated and out-of-context information," Roblox said. "We believe together we can increase safety not just on Roblox, but on all platforms used by kids and teens. The AG's office is partnering with plaintiff's attorneys, who we believe have misrepresented matters to seek financial gain."

The spokesperson added, "Roblox has taken an industry-leading stance on age-based communication and will require facial age estimation for all Roblox users who access our communications features by the end of this year. Roblox does not allow image sharing via chat, and most chat on Roblox is subject to filters designed to block the sharing of personal information. We constantly monitor all communication for critical harms and swiftly remove violative content when detected and work closely with law enforcement."

Roblox pointed to more information about its efforts to implement age verification, which would more reliably confirm a user's age but could also deter platform usage altogether.

This includes verification through selfie-videos, the aforementioned "facial age estimation," ID, or verified parental consent.

RELATED: Kids 'cosplaying as ICE agents' and performing raids on 'illegals' in Roblox game


The sheer volume of Roblox users makes any enforcement incredibly difficult to pull off without pre-existing barriers to entry, monitoring, or filtration systems. This brings up further issues surrounding digital ID, including, for example, the exposure of children's likenesses.

At the same time, gamers are constantly finding new ways to develop ridiculous scenarios on the platform, such as performing ICE raids or in-game protests. There also exists the threat of bad actors grouping together to discuss crimes or make terror plots.

Roblox told Blaze News that it employs rigorous text chat filters to stop inappropriate contact with minors.

Additionally, the company said that while it started as a "platform for children," 64% of the user base is now over 13 years old.

Blaze News did find several videos on YouTube appearing to be re-creations of Kirk's assassination within the video game.
