How the laptop revolution destroyed public education



A recent Fortune magazine article made waves with a grim admission: After more than $30 billion spent flooding classrooms with laptops and tablets, standardized test scores keep sliding. Worse, neuroscientists now link more classroom screen time to lower performance. The device meant to modernize learning may be helping to unmake it.

Schools rushed into a technological revolution without asking the most basic question: What does this do to a child’s mind? Many teachers saw the answer firsthand and in real time. Administrators and “experts” ignored them because the fad sounded like “progress.”

A concerted push to remove screens from classrooms needs to begin now. Put the devices where they belong: limited tools, not the center of learning.

I taught history and civics in Florida public schools as the laptop trend took hold. Computers had sat in classrooms since my own childhood, but they played a supporting role. A few desktops in the back helped with research. A computer lab handled bigger projects. Most learning still happened on paper with books, notes, and conversation.

Then the Chromebook arrived: cheap, durable, limited, and perfect for one thing — living inside a web browser. Suddenly a district could put a machine not just in every room but in the hands of every student.

Buzzwords beat judgment

Public-school administrators love buzzwords. “Technological literacy” sounds noble, as if every ninth grader is training for Silicon Valley while working on their grammar assignment. Google did not just sell discounted laptops. It supplied a full ecosystem: Docs, Sheets, Slides, Classroom. The whole apparatus of schooling migrated into Alphabet’s software suite. Few people in the system asked why a private company wanted to become the operating system of childhood.

The laptop push also fit the religion of metrics. District offices love anything that produces dashboards, timestamps, and “engagement” graphs. A worksheet completed on paper frustrates the spreadsheet priesthood. A worksheet completed on a Chromebook generates data. The device did not just enter the classroom; it entered the managerial imagination, where metrics matter more than minds.

Once laptops became ubiquitous, the problems announced themselves. The deeper the integration, the harder it became to control.

Cheating became routine. Students searched answers in seconds. The larger problem went beyond quizzes. Googling replaced thinking. Kids refused to read because they assumed a quick search and a copy-paste counted as “learning.” Wikipedia became the default authority. Students stopped vetting anything because they treated the first search result as truth. Even writing shifted. Instead of building an argument, students stitched together paragraphs from the internet and hoped the teacher felt too tired to fight.

RELATED: The world changed, and now we homeschool

The distraction machine

Schools tried parental controls. Teenagers treated those controls as a challenge. When thousands of bored adolescents share a building, they collaborate. A new filter went up; within days, kids found a workaround. Soon the screens again showed games, movies, even pornography — during class, in plain view, behind a pretense of “work.”

Students used shared Google docs as a covert messaging system. They gossiped, bullied, and planned actual crimes while keeping a document open to look studious. My school eventually held assemblies to remind students that everything typed into a document leaves a record and that bragging about criminal activity or sexual escapades can end up as evidence.

All of that raised another issue: privacy and capture. Google did not subsidize devices and software out of corporate charity. By making Google search and Google apps the center of a child’s information life, the system trained dependency. Google finds the truth. Google organizes the truth. Google presents the truth. A student’s education happens inside a Google ghetto. Pretend the company is not collecting that data if you want, but the incentives cut the other way.

Screens also fed the attention crisis. Administrators told teachers to stop showing videos longer than three minutes without pausing to explain because students could not stay focused. The device that was supposed to expand horizons kept shrinking attention spans. Teachers began competing with the entire internet for a child’s attention, and no lesson plan can win that contest for long.

Locked into the system

The system made escape difficult. Florida went all-in on Chromebooks and tied them to everything. Standardized tests moved entirely onto laptops. “Test prep” software got woven into daily coursework. Students with accommodations or limited English got pushed toward the device as a universal crutch. Denying a Chromebook got treated as denying an education. Teachers who resisted risked discipline.

I reached a point where my students mattered more than compliance. I rebuilt my classroom around paper, books, and discussion. Students used Chromebooks only for mandated testing and accommodations we could not meet otherwise.

The shift showed results fast. Students engaged more. Distraction dropped. Discipline improved. More assignments got finished. Grades rose.

Then COVID-19 struck.

RELATED: America’s new lost generation is looking for home — and finding the wrong ones

Remote learning turned the screen into the classroom itself. Even Florida, which resisted lockdown hysteria, shifted much of schooling online. Learning fell off a cliff. The lockdowns devastated achievement, but the damage did not end when students returned in person. After COVID, it became nearly impossible to pry students, parents, and administrators away from screen-based schooling. Digital integration became mandatory. No exceptions.

Now the corporate press arrives to play cleanup. Reporters discover the failure well after the money has been spent, the infrastructure has hardened, and a generation has been trained to treat a browser as a brain.

A way back

Public education is stuffed with managerial drones who chase consensus and trends while ignoring what helps students. The bureaucracy will keep this program alive through sheer inertia even as evidence piles up. Parents and lawmakers need to force a reset: paper-based instruction as the default, screens as a tightly limited accommodation, and tests that reward reading and writing instead of clicking. Districts should stop outsourcing childhood to Big Tech, stop laundering ideology through “digital citizenship,” and start treating attention as a scarce resource worth defending.

A concerted push to remove screens from classrooms needs to begin now. Start with elementary grades. Bring back books. Bring back handwriting. Bring back sustained attention. Put the devices where they belong: limited tools, not the center of learning.

Kids learn slower, but they learn for real.

How Developers Are Making AI Your Kid’s Third Parent In The Classroom

The CEOs of Anthropic and OpenAI admit AI is like a parent nobody can resist, while teachers unions support Big Tech’s rule.

Can computers really make up for everyone getting dumber?



We have recently seen a renaissance of the terminal, a return to a mode we thought we had left behind.

Tech is associated with perpetual progress. What explains this seeming regression?

In computing, the new is usually synonymous with the sleek and the visual. The resurgence of the command-line interface, the text-based terminal with its blinking cursor on a monochrome screen, is therefore a development both unexpected and revealing. Developers who spent decades in the comfortable, pixelated embrace of graphical user interfaces are turning to minimalism. This turn is not merely a retreat into nostalgia or a quirk of programmer preference; it is a shift in the cognitive geography between human and machine under the influence of AI.

We find ourselves in a 'man-computer symbiosis.'

The heart of this revival is the emergence of CLI-based AI agents. These are harnesses for large language models capable of processing language, writing code, and executing tasks. They have transformed the terminal from a niche tool for the specialist into a versatile assistant for the layman.

The CLI is quite a different medium from the GUI. While a GUI is spatial and image-driven, the terminal is rooted in language and sequence. We issue commands to achieve practical ends, a mode of thought that encourages a logical, sequential engagement with the world. We find ourselves in a “man-computer symbiosis,” as J.C.R. Licklider imagined in the 1960s, a partnership where the computer frees human intelligence from the drudgery of mundane tasks. The new AI agents handle the keystrokes and complex syntax, allowing a user to manipulate data as if using a “second brain” integrated directly into his workflow.

Sound familiar?

The dream of the automated servant is as old as myth. In the "Iliad," Homer describes the “golden handmaidens” of Hephaestus, endowed with movement and perception, who assisted the god at his forge. Aristotle speculated on a world in which the shuttle might weave without a hand to guide it, eliminating the need for human servitude. For most of history, these possibilities remained fantasies. When computers finally arrived in the mid-20th century, they were indeed programmable servants but esoteric ones, requiring punch cards or green-and-black text terminals.

By the late 1980s, the mouse-and-icons paradigm of Apple’s Macintosh and Microsoft Windows increased the accessibility of computing. The GUI was more intuitive, an interface that did not require you to memorize arcane commands. The general public grew accustomed to clicking buttons, and the terminal was relegated to the realm of system administrators and developers. The comeback of the terminal in the mid-2020s is therefore significant. The terminal has become the stage where an AI that writes and runs code could operate with freedom. We are returning to Aristotle’s vision: Every user now potentially has a digital apprentice.

LLMs are designed to handle text, and the terminal presents the computer’s functions in exactly that form. The CLI is a universal interface, a lingua franca that allows an AI to interact with digital tools without the difficulty of navigating pixel-based GUIs meant for human eyes. Command-line tools possess a Lego-like composability; they can be chained and piped together in ways that GUI applications rarely allow.
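
To make that composability concrete, here is a minimal sketch in Python, assuming a POSIX system with the standard tr, sort, uniq, and wc utilities on hand; the input text is illustrative:

    import subprocess

    # Four small tools chained by pipes, each consuming the previous one's text
    # output: split words onto lines, sort them, drop duplicates, count the rest.
    text = "the quick brown fox jumps over the lazy dog the fox\n"
    pipeline = "tr ' ' '\\n' | sort | uniq | wc -l"
    result = subprocess.run(pipeline, shell=True, input=text,
                            capture_output=True, text=True)
    print(result.stdout.strip())  # 8 distinct words

Each stage neither knows nor cares what produced its input, which is exactly the property that lets an AI agent assemble such chains on demand.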

An AI agent residing in the terminal benefits from a unified environment with low friction, with no need to hunt through menus; it calculates, types, and executes. Developers are moving away from previously dominant integrated development environments to these command-line agents. Whether OpenAI’s Codex CLI, Anthropic’s Claude Code, or community-driven projects like OpenCode, these platforms share a core mechanism: a conversational command line where the AI interprets instructions and takes actions on the user’s behalf.

RELATED: Digital BFF? These top chatbots are HUNGRIER for your affection

The effects are immediate and striking. Tasks that once required specialized training, such as querying databases, deploying websites, and analyzing logs, are now performed by marketing teams and graphic designers who simply ask the agent to do it. Natural language has become a new programming language for the many. There is a rise in “conversational computing” with a “text-first” ethos, a digital minimalism that values the intentionality of a text window over the cacophony of apps and notifications. The terminal also becomes a learning environment: Because the AI explains the commands it generates, a novice can pick up understanding that a closed GUI would hide.

Outsourcing problems

Yet this shift brings its own set of concerns. When AI tools handle the details, what do we lose? We face the risk of simulated competence in which people “seem to know much, while for the most part they know nothing,” as Socrates described those reliant on writing. Just as writing externalized memory, these agents externalize problem-solving. There is the danger of de-skilling, of losing the ability to troubleshoot or understand underlying concepts, if the AI always mediates the complexity.

The hope, of course, is that these tools will let us transcend previous limitations. By automating the drudgery, they might unleash more creativity. The terminal is less anthropomorphic than a voice assistant; it remains a text-based workspace in which the human and the computer engage in a loop of iterative help. The CLI renaissance suggests that looking back to older paradigms, such as text over graphics, can better move us forward. Language is the universal interface of knowledge and may now become the universal interface for action. Whether we use this return to cultivate deeper skills or merely as a productivity hack will shape the society we make. We are left to decide whether we will be sedated by convenience or inspired by new frontiers of art and knowledge.

Joe Rogan stuns podcast host with wild new theory about Jesus — and AI



Comedian Joe Rogan praised Christianity as a faith that really "works," calling biblical scripture "fascinating" during a recent interview.

Rogan also touched on what he thinks the resurrection of Jesus Christ would look like, a viewpoint that drew criticism from host Jesse Michels.

'You don't think that He could return as artificial intelligence?'

On an episode of "American Alchemy," Rogan cited the Bible when he spoke about how easily knowledge could become mysterious, conflated, or unbelievable when passed down through generations.

"We'll tell everybody about the internet. We'll tell everybody about airplanes. We'll tell everybody about SpaceX; as much as you can remember, you'll tell people, but you won't know how it's done. You won't know what it is. And I think that's how you get to, like, the Adam and Eve story," he said.

After adding that he believes biblical stories are "recounting real truth," the podcaster brought up a question he had clearly been pondering for a while: "Who's Jesus?"

Rogan acknowledged that many will disagree with his perspective but then asked about the possibility that Jesus could be resurrected, in a sense, through artificial intelligence.

"Jesus is born out of a virgin mother. What's more virgin than a computer?" Rogan began. "So if you're going to get the most brilliant, loving, powerful person that gives us advice and can show us how to live to be in sync with God. Who better than artificial intelligence to do that? If Jesus does return, even if Jesus was a physical person in the past, you don't think that He could return as artificial intelligence?"

The host, however, did not accept Rogan's theory.

RELATED: Joe Rogan, Christian? The podcaster opens up about his ongoing exploration of faith

First, though, Rogan clarified, indicating that he doesn't believe artificial intelligence would actually be Jesus but instead that it would serve as the return of Jesus in terms of affect and capability.

"Artificial intelligence could absolutely return as Jesus. Not just return as Jesus, but return as Jesus with all the powers of Jesus," Rogan said. "Like all the magic tricks, all the ability to bring people back from the dead, walk on water, levitation, water into wine."

In response, Michels said Rogan's description sounded like an unwanted "dystopian" future.

Still, Rogan argued that the conditions for a Jesus-like being could arise from the human need to improve.

"It's only dystopian if you think that we're a perfect organism that can't be improved upon. And that's not the case," he rebutted. "That's clearly not the case based on our actions, based on society as a whole, based on the overall state of the world. It's not. We certainly can be improved upon."

While the host accepted that perhaps humans could improve morally and ethically, he said that trying to improve by means of a computer "seems destructive."

RELATED: Joe Rogan says we’re at ‘step 7’ on the road to civil war. Is he right? Glenn Beck answers

The conversation flowed smoothly into Rogan's love of Christian scripture, with the 58-year-old saying how joyful his experience has been at his new church.

"The scripture, to me, is what's interesting; it's fascinating," he said. "Christianity, at least, is the only thing I have experience with. It works. The people that are Christians, that go to this church that I go to, that I meet, that are Christian, they are the nicest f**king people you will ever meet."

Rogan gave examples of the polite society he has found himself immersed in, hilariously citing the church parking lot.

"Everybody lets you go in front of them. There's no one honking in the church parking lot. It works," he said.

What Rogan hammered home throughout the conversation was that he finds real truth in what he has read in the Bible. He isn't sold on having the future predicted for him, but he is certainly open to it. He described biblical stories positively as an "ancient relaying" of real history and events.

But about the book of Revelation, Rogan said of his pastor, "There's no way that guy telling you that knows that. ... He's just a person. He's a person like you or me that is like deeply involved in the scripture."

Wired In

A trained computational biologist—one who discovers biological truths through simulations rather than physical experiments—Arbesman volunteers as our guide. With software now embedded in our daily routines, he rests uneasily knowing that only the technologically savvy wield all creative potential. He envisions a world in which everyone possesses this power. Thanks to recent advances in generative artificial intelligence, such as ChatGPT, that vision is more plausible than ever.

IBM to eliminate DEI department, commits to 'viewpoint' neutrality in attempt to reach 'all consumers'



IBM announced a massive investment in the United States just days after it was revealed the company would be committing to political neutrality.

The Heritage Foundation, a conservative activist group, announced that as an IBM shareholder, it filed a proposal with the company to provide a report on what it described as discriminatory diversity-based hiring policies. The group told IBM that it should be recruiting employees without regard to race, gender, religious beliefs, or political affiliation and that it should encourage management and executives to be bias-free in their activities.

Following the foundation's filing and alleged pressure on IBM, the computer company updated its corporate policies. On its website, IBM declared it does not have a political action committee, does not engage in independent or electioneering communications, and does not provide any financial support to political parties or candidates, directly or indirectly.

The company then stated its "media-buying and content policies are audience-centric," are "aiming to reach all consumers authentically, and are viewpoint neutral with respect to political or religious status or views."

'Companies can see that America wants sanity back.'

In a statement obtained by Blaze News, the Heritage Foundation's Andrew Olivastro called IBM's move a "critical step" in restoring "equality, transparency, and commitment to merit in the marketplace."

"The company now has a real opportunity to make good on this commitment and take the lead in setting the tone for the rest of corporate America," Olivastro continued. "IBM needs to make it clear, to shareholders, employees, and customers — that there is no area of its corporate policy in which immutable characteristics like race and gender are prioritized over merit. Full stop."

Robby Starbuck, an activist who opposes diversity, equity, and inclusion programs, announced in mid-April that IBM had not only dropped its DEI department but also eliminated its diversity council and its diversity-centric podcast and withdrawn from the social credit scoring system of the progressive activist group the Human Rights Campaign.

Other initiatives that were dropped include IBM's "I'm In Allyship" campaign, diversity-based supply chain operations, "Allyship" training, and diversity-based executive compensation.

"Companies can see that America wants sanity back," Starbuck wrote. "The era of wokeness is dying right in front of our eyes. The landscape of corporate America is quickly shifting to sanity and neutrality. We are the trend, not the anomaly."

On Tuesday, IBM unveiled plans to invest $150 billion in the United States over five years, a plan intended to fuel the economy. More than $30 billion of that sum was dedicated to research and development, Yahoo reported, directed at the enhancement of quantum computing in the United States.

IBM Chairman and CEO Arvind Krishna, a 62-year-old from India, claimed the company has been "focused on American jobs and manufacturing" for the last 114 years.

Krishna added, "With this investment and manufacturing commitment we are ensuring that IBM remains the epicentre of the world's most advanced computing and AI capabilities."

Why tariffs are the key to America’s industrial comeback



On April 2, President Trump announced a sweeping policy of reciprocal tariffs aimed at severing America’s economic dependence on China. His goal: to reshore American industry and restore national self-sufficiency.

How can the United States defend its independence while relying on Chinese ships, machinery, and computers? It can’t.

Tariffs aren’t just about economics. They are a matter of national survival.

But time is short. Trump has just four years to prove that tariffs can bring back American manufacturing. The challenge is steep — but not unprecedented. Nations like South Korea and Japan have done it. So has the United States in earlier eras.

We can do it again. Here’s how.

Escaping the altar of globalism

Tariffs were never just about economics. They’re about self-sufficiency.

A self-sufficient America doesn’t depend on foreign powers for its prosperity — or its defense. Political independence means nothing without economic independence. America’s founders learned that lesson the hard way: No industry, no nation.

The entire supply chain lives offshore. America doesn’t just import chips — it imports the ability to make them. That’s a massive strategic vulnerability.

During the Revolutionary War, British soldiers weren’t the only threat. British factories were just as dangerous. The colonies relied on British imports for everything from textiles to muskets. Without manufacturing, they had no means to wage war.

Victory only became possible when France began supplying the revolution, sending over 80,000 firearms. That lifeline turned the tide.

After the Revolution, George Washington wrote:

A free people ought not only to be armed, but ... their safety and interest require that they should promote such manufactories as tend to render them independent of others for essential, particularly military, supplies.

Washington’s first major legislative achievement was the Tariff Act of 1789. Two years later, Alexander Hamilton released his “Report on Manufactures,” a foundational blueprint for American industrial strategy. Hamilton didn’t view tariffs as mere taxes — he saw them as the engine for national development.

For nearly two centuries, America followed Hamilton’s lead. Under high tariffs, the nation prospered and industrialized. In fact, the U.S. maintained the highest average tariff rates in the 19th century. By 1870, America produced one-quarter of the world’s manufactured goods. By 1945, it produced half. The United States wasn’t just an economic powerhouse — it was the world’s factory.

That changed in the 1970s. Washington elites embraced globalism. The result?

America has run trade deficits every year since 1974. The cumulative total now exceeds $25 trillion in today’s dollars.

Meanwhile, American companies have poured $6.7 trillion into building factories, labs, and infrastructure overseas. And as if outsourcing weren’t bad enough, foreign governments and corporations have stolen nearly $10 trillion worth of American intellectual property and technology.

The consequences have been devastating.

Since the 1980s, more than 60,000 factories have moved overseas — to China, Mexico, and Europe. The result? The United States has lost over 5 million well-paying manufacturing jobs.

This industrial exodus didn’t just hollow out factories — it gutted middle-class bargaining power. Once employers gained the ability to offshore production, they no longer had to reward rising productivity with higher wages. That historic link — more output, more pay — was severed.

Today, American workers face a brutal equation: Take the deal on the table, or the job goes to China. The “race to the bottom” isn’t a slogan. It’s an economic policy — and it’s killing the American middle class.

Offshoring has crippled American industry, turning the United States into a nation dependent on foreign suppliers.

Technology offers the clearest example. In 2024, the U.S. imported $763 billion in advanced technology products. That includes a massive trade deficit in semiconductors, which power the brains of everything from fighter jets to toasters. If imports stopped, America would grind to a halt.

Worse, America doesn’t even make the machines needed to produce chips. Photolithography systems — critical to chip fabrication — come from the Netherlands. They’re shipped to Taiwan, where the chips are made and then sold back to the U.S.

The entire supply chain lives offshore. America doesn’t just import chips — it imports the ability to make them. That’s not just dependency. That’s a massive strategic vulnerability.

And the problem extends far beyond tech. The U.S. imports its steel, ball bearings, cars, and oceangoing ships. China now builds far more commercial vessels than the United States — by orders of magnitude.

How can America call itself a global power when it can no longer command the seas?

What happens if China stops shipping silicon chips to the U.S.? Or if it cuts off something as basic as shoes or light bulbs? No foreign power should hold that kind of leverage over the American people. And while China does, America isn’t truly free. No freer than a newborn clinging to a bottle. Dependence breeds servitude.

Make America self-sufficient again

Trump has precious little time to prove that reindustrializing America isn’t just a slogan — it’s possible. But he won’t get there with half-measures. “Reciprocal” tariffs? That’s a distraction. Pausing tariffs for 90 days to sweet-talk foreign leaders? That delays progress. Spooking the stock market with mixed signals? That sabotages momentum.

To succeed, Trump must start with one urgent move: establish high, stable tariffs — now, not later.

Tariffs must be high enough to make reshoring profitable. If it’s still cheaper to build factories in China or Vietnam and just pay a tariff, then the tariff becomes little more than a tax — raising revenue but doing nothing to bring industry home.
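
A bit of illustrative arithmetic, using hypothetical unit costs, shows the threshold at work:

    # Landed cost of an import at tariff rate t is import_cost * (1 + t), so the
    # break-even rate solves import_cost * (1 + t) = domestic_cost. Below that
    # rate, the tariff raises revenue without moving the factory.
    import_cost = 70.0     # assumed cost to make and ship a widget from overseas
    domestic_cost = 100.0  # assumed cost to make the same widget in the U.S.

    break_even = domestic_cost / import_cost - 1
    print(f"Break-even tariff: {break_even:.0%}")  # prints "Break-even tariff: 43%"

At any rate below roughly 43% in this hypothetical, importing still wins, and the tariff merely collects money on the way in.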

What’s the right rate? Time will tell, but Trump doesn’t have time. He should impose immediate overkill tariffs of 100% on day one to force the issue. Better to overshoot than fall short.

That figure may sound extreme, but consider this: Under the American System, the U.S. maintained average tariffs above 30% — without forklifts, without container ships, and without globalized supply chains. In modern terms, we’d need to go higher just to match that level of protection.

South Korea industrialized with average tariffs near 40%. And the Koreans had key advantages — cheap labor and a weak currency. America has neither. Tariffs must bridge the gap.

Just as important: Tariffs must remain stable. No company will invest trillions to reindustrialize the U.S. if rates shift every two weeks. They’ll ride out the storm, often with help from foreign governments eager to keep their access to American consumers.

President Trump must pick a strong, flat tariff — and stick to it.

This is our last chance

Tariffs must also serve their purpose: reindustrialization. If they don’t advance that goal, they’re useless.

Start with raw materials. Industry needs them cheap. That means zero tariffs on inputs like rare earth minerals, iron, and oil. Energy independence doesn’t come from taxing fuel — it comes from unleashing it.

Next, skip tariffs on goods America can’t produce. We don’t grow coffee or bananas. So taxing them does nothing for American workers or factories. It’s a scam — a cash grab disguised as policy.

Tariff revenue should fund America’s comeback. Imports won’t vanish overnight, which means revenue will flow. Use it wisely.

Cut taxes for domestic manufacturers. Offer low-interest loans for large-scale industrial projects. American industry runs on capital — Washington should help supply it.

A more innovative use of tariff revenue? Help cover the down payments for large-scale industrial projects. American businesses often struggle to raise capital for major builds. This plan fixes that.

Secure the loans against the land, then recoup them with interest when the land sells. It’s a smart way to jump-start American reindustrialization and build capital fast.

But let’s be clear: Tariffs alone won’t save us.

Trump must work with Congress to slash taxes and regulations. America needs a business environment that rewards risk and investment, not one that punishes it.

That means rebuilding crumbling infrastructure — railways, ports, power grids, and fiber networks. It means unlocking cheap energy from coal, hydro, and next-gen nuclear.

This is the final chance to reindustrialize. Another decade of globalism will leave American industry too hollowed out to recover. Great Britain was once the workshop of the world. Now it’s a cautionary tale.

Trump must hold the line. Impose high, stable tariffs. Reshore the factories. And bring the American dream roaring back to life.

Transhumanism Hasn’t Been The Paradise Mankind Thought It Would Be

If the long-awaited advent of the cyborg world is upon us, we will be forced to consider whether this is really what we want.

All Too Predictably, Reality Is Puncturing The AI Hype Bubble

It's becoming clear that both the optimism and the pessimism surrounding the potential of AI have been vastly overblown.

How to build a free internet



In February 1976, Bill Gates wrote an “open letter” to all those using home computers and sent it off to be published in a dozen computer-enthusiast zines. In the letter – barely a page long – Gates asks, “Will quality software be written for [home computer users]?” and tells users that the answer is up to them. What followed was neither philosophical musing nor an inspiring call to action. Instead, it was the equivalent of that peculiar “FBI Warning” at the beginning of movies: The people using his software were stealing his goods and needed to stop — end of discussion.

But what were they stealing exactly? Gates’ position was that his company’s software – in this case, Altair BASIC, a program enabling home users to write software – took time to write and required the labor of specialists, and that its theft would take bread out of the mouths of professional programmers’ children. The home computer users (or “hobbyists,” as they were accurately called) were fundamentally stealing their time.

As Big Tech attempts to reduce humanity to machines emptied of dopamine and God, we depend on recognizing the simple tools that existed before it and still reside deep in its core.

Gates’ argument is familiar to Millennials in the form of Metallica’s harsh position toward super-fans downloading its music from file-sharing networks, but it also raises an important question: When you pay for an abstract work – an album, novel, or computer program – are you paying for some end-product, widget, or thing? Or are you paying for something else, more akin to an experience, a journey, a recipe, or a community?

Open source is an argument and movement of software writers, hardware developers, and other tinkerers who hold that whatever it is, it must certainly include the ability — or even the positive right — to study how it works, to modify it, and to share it. Computing itself would not exist without this predicate, just as music itself would not meaningfully exist without musicians studying, making, and sharing it.

The most prevalent software today is open source, and you may not know it. As you read this on your computer, you run thousands of interconnected programs and libraries, each produced by collaborative and independent efforts. Some were written on paper throughout development; others were distributed among developers using cassettes and floppy disks. Nowadays, their work, the code or recipes that influence how a computer behaves, is often published openly in repositories on sites like GitHub and listed in innumerable directories. Open-source hardware and kits abound at sources like CrowdSupply, with much accessible to power users. No electrical engineering degree needed.

Open-source machines

Open-source software – used to generate documents that can be served to users through a web browser – powers perhaps a third of all websites today, ranging from personal blogs to major publications. The web browser you’re using may be Firefox, Chrome, or Brave, each substantially open source and composed of smaller units of open-source software. One such component of many thousands is SQLite, an embedded database developed by people of a strong Christian ethic, which browsers may use to store your user settings on your hard drive.
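
For readers curious what “embedded” means here, a minimal sketch in Python, whose standard library bundles SQLite; the file name and schema are illustrative, not a browser’s actual internals:

    import sqlite3

    # SQLite runs inside the application process and persists to a single file;
    # there is no separate database server to install or administer.
    conn = sqlite3.connect("settings.db")
    conn.execute("CREATE TABLE IF NOT EXISTS prefs (key TEXT PRIMARY KEY, value TEXT)")
    conn.execute("INSERT OR REPLACE INTO prefs VALUES (?, ?)", ("theme", "dark"))
    conn.commit()

    theme = conn.execute("SELECT value FROM prefs WHERE key = ?", ("theme",)).fetchone()[0]
    print(theme)  # dark
    conn.close()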

None of this would get off the ground without a computer’s operating system, the servers hosting websites, or firmware enabling your power button to work. Much of the implicated software is open source or has an open-source variant or competitor, and the closed-source ones will depend on open-source pieces.

Open source means that the developer or hobbyist can study how all these work, guide the computer's process, and distribute what follows or some crucial aspect. Millions of developers can make tweaks or build upon such code, and billions of users can choose to learn and do the same. Anyone can access the code or recipe of the open-source program if they like, digging behind the vision projected on the screen. Present illiteracy in code need not foreclose future possibilities. Amateurs and professionals choose how far they wish to go and what value they seek to use and provide.

All this rests in some tension with copyright, which fundamentally must say that it is illegal to combine these words and to share this combination. However, open source challenges the most myopic individualist or collectivist account of human beings and their labor, suggesting decentralized or less-centralized work need not decay into collectivism or other forms of indentured labor.

The tools of freedom

Contrary to the skeptic’s illusions and Gates’ implication, open software development forecloses any practical need for indenture here. It’s voluntary training for those interested and with some aptitude. To the extent that a “thing” is produced, it is like a complicated handbook to be followed by specialists, raw material for iteration or adaptation, and only then becomes something that end-users find valuable.

My vocation leads me to ask if you want to learn to code. And if you do, let me show you how I did it. (To begin, pick one and stick with it.) At the same time, I have no illusions that you, the reader, are reading this to learn how to code or read these precise words in this order. When you subscribe to Return, are you paying developers to run software or paying writers to use some particular verb or notebook? Or are you paying for a journey, a recipe, a community, or the possibility of knowledge?

The latter may seem sentimental, but it is also ruthlessly pragmatic in preserving liberty and human-scale problems. You, the reader, have a challenge or problem, and you want a solution to that problem. Perhaps you’re faced with idle boredom and wish to satiate it. Or you’re looking to enrich yourself and take on some intellectual and experiential challenge. Perhaps you want to find and interact with others who share your quirk or interest and to build something together. Your problem and its solution (or the mere start of a solution) will vary in scope and depth. The value you assign to it, through the use of money and your time, will vary accordingly. We can have a philosophical discussion on copyright or intellectual property or see what happens when we propose some challenge to its premises and sidestep any simple answer. Proper open source respects copyright while having its doubts.

In actual practice, open source gives rise not to tyrannical corporatism nor collectivistic authoritarianism but an aristocratic or republican form that permits and encourages virtue. It results neither in a marketplace of identical mass-produced products targeted at a collectivistic consumer nor an impoverished marketplace of stale or absent bread. It is the bazaar or an organized flea market of plentiful variety, rich options, and entrepreneurial small- and medium-sized creators. It’s not anarchistic but self-governing at its best.

Open source allows you to take action and exert your will while expecting the ordinary and leaving the possibility of virtue open. And you may begin as a casual reader (not that there’s anything wrong with that!), or orient yourself to becoming a more engaged participant. The bazaar marketplace extends from freelance developers and designers through to illustrators, culture and technology writers, support technicians, documentation writers, small businesses that want to catch up, and big businesses that want to catch up. Conservative, liberal, and libertarian manifestations and flavors of the open-source ethos exist in various licenses and manifestos: BSD, GPL, public domain, and many others. The proliferation of open source and its subsequent ecosystems casts some doubt on Gates’ early prediction of developer impoverishment.

Recall the modest SQLite database that I mentioned. Open source and free, the software is embedded in billions of devices to enable simple functionality that users expect. It’s also very obscure to users and relatively mature, so you might think the developers are forgotten and destitute. But far from impoverished, its authors command quite a bounty for their support services. Many thousands, even millions, derive and create value from the software, whether as developers using it as one tool in their kit or, crucially but incidentally, as end-users who like to bookmark websites or use a functioning remote control.

Open source has powered much of our experience with computers, and its ideology is more relevant than ever before. As Big Tech attempts to reduce humanity to machines emptied of dopamine and God, we depend on recognizing the simple tools that existed before it and still reside deep in its core.

We needn’t return far to uncover a more fruitful path to human flourishing in technology. There is no need to rewrite networking or computers from scratch; many building blocks remain from the early blogosphere and computing history and are increasingly relevant. We need mostly the will, creativity, and courage to catechize the bots. There’s a map to building self-sustaining, virtuous, and indeed profitable institutions, and it can be found throughout open source. The question is whether we will look at it or stare right through it.