Transhumanism Hasn’t Been The Paradise Mankind Thought It Would Be
If the long-awaited advent of the cyborg world is upon us, we will be forced to consider whether this is really what we want.
In February 1976, Bill Gates wrote an “open letter” to all those using home computers and sent it off to be published in a dozen computer-enthusiast zines. In the letter – barely a page long – Gates asks, “Will quality software be written for [home computer users]?” and tells users that the answer is up to them. The answer did not follow a philosophical musing nor an inspiring call to action. Instead, it was the equivalent of that peculiar “FBI Warning” at the beginning of movies: The people using his software were stealing his goods and needed to stop — end of discussion.
But what were they stealing exactly? Gates’ position was that his company’s software – in this case, Altair BASIC, or a piece of software enabling home users to write software – took time to write, required the labor of specialists, and that its theft would take bread out of the mouths of professional programmers’ children. The home computer users (or “hobbyists” as they were accurately called) were fundamentally stealing their time.
Gates’ argument is familiar to Millennials in the form of Metallica’s harsh position toward super-fans downloading its music from file-sharing networks, but it also raises an important question: When you pay for an abstract work – an album, novel, or computer program – are you paying for some end-product, widget, or thing? Or are you paying for something else, more akin to an experience, a journey, a recipe, or a community?
Open source is an argument and movement of software writers, hardware developers, and other tinkerers who hold that whatever it is, it must certainly include the ability — or even the positive right — to study how it works, to modify it, and to share it. Computing itself would not exist without this predicate, just as music itself would not meaningfully exist without musicians studying, making, and sharing it.
The most prevalent software today is open source, and you may not know it. As you read this on your computer, you run thousands of interconnected programs and libraries, each produced by collaborative and independent efforts. Some were written on paper throughout development; others were distributed among developers using cassettes and floppy disks. Nowadays, their work, the code or recipes that influence how a computer behaves, is often published openly in repositories at places like GitHub and listed in innumerable directories. Open-source hardware and kits abound at sources like CrowdSupply, with much accessible to power users. No electrical engineering degree needed.
Open-source software – used to generate documents that can be served to users through a web browser – powers perhaps a third of all websites today, ranging from personal blogs to major publications. The web browser you’re using may be Firefox, Chrome, or Brave, each substantially open source and composed of smaller units of open-source software. One such component among many thousands is SQLite, an embedded database developed by people of a strong Christian ethic, which browsers may use to store your user settings on your hard drive.
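To make that concrete, here is a minimal sketch of the kind of key-value settings store a browser might keep in SQLite. It uses Python's built-in sqlite3 module, and an in-memory database rather than a real browser's settings file; the table and setting names are illustrative, not taken from any actual browser.

```python
import sqlite3

# Illustrative sketch: an embedded SQLite "settings" store, the sort of
# key-value persistence a browser might keep on your hard drive.
# ":memory:" keeps this example self-contained; a real app would use a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")
conn.execute("INSERT OR REPLACE INTO settings (key, value) VALUES (?, ?)",
             ("homepage", "https://example.org"))
conn.commit()

# Read the setting back, as a browser would on startup.
(value,) = conn.execute("SELECT value FROM settings WHERE key = ?",
                        ("homepage",)).fetchone()
print(value)  # https://example.org
```

Because SQLite itself is open source and in the public domain, anyone curious can read exactly how such a store works beneath calls like these.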
None of this would get off the ground without a computer’s operating system, the servers hosting websites, or firmware enabling your power button to work. Much of the implicated software is open source or has an open-source variant or competitor, and the closed-source ones will depend on open-source pieces.
Open source means that the developer or hobbyist can study how all these work, guide the computer's process, and distribute what follows or some crucial aspect. Millions of developers can make tweaks or build upon such code, and billions of users can choose to learn and do the same. Anyone can access the code or recipe of the open-source program if they like, digging behind the vision projected on the screen. Present illiteracy in code need not foreclose future possibilities. Amateurs and professionals choose how far they wish to go and what value they seek to use and provide.
All this rests in some tension with copyright, which fundamentally must say that it is illegal to combine these words and to share this combination. However, open source challenges the most myopic individualist or collectivist account of human beings and their labor, suggesting decentralized or less-centralized work need not decay into collectivism or other forms of indentured labor.
Contrary to the skeptic’s illusions and Gates’ implication, open software development forecloses any practical need for indenture here. It’s voluntary training for those interested and with some aptitude. To the extent that a “thing” is produced, it is like a complicated handbook to be followed by specialists, raw material for iteration or adaptation, and only then becomes something that end-users find valuable.
My vocation leads me to ask if you want to learn to code. And if you do, let me show you how I did it. (To begin, pick one and stick with it.) At the same time, I have no illusions that you, the reader, are reading this to learn how to code or read these precise words in this order. When you subscribe to Return, are you paying developers to run software or paying writers to use some particular verb or notebook? Or are you paying for a journey, a recipe, a community, or the possibility of knowledge?
The latter may seem sentimental, but it is also ruthlessly pragmatic in preserving liberty and human-scale problems. You, the reader, have a challenge or problem, and you want a solution to that problem. Perhaps you’re faced with idle boredom and wish to satiate it. Or you’re looking to enrich yourself and take on some intellectual and experiential challenge. Perhaps you want to find and interact with others who share your quirk or interest and to build something; your problem and the solution (or the mere start of a solution) will vary in scope and depth. The value you assign to it, through the use of money and your time, will vary accordingly. We can have a philosophical discussion on copyright or intellectual property or see what happens when we propose some challenge to its premises and sidestep any simple answer. Proper open source respects copyright while having its doubts.
In actual practice, open source gives rise not to tyrannical corporatism nor collectivistic authoritarianism but an aristocratic or republican form that permits and encourages virtue. It results neither in a marketplace of identical mass-produced products targeted at a collectivistic consumer nor an impoverished marketplace of stale or absent bread. It is the bazaar or an organized flea market of plentiful variety, rich options, and entrepreneurial small- and medium-sized creators. It’s not anarchistic but self-governing at its best.
Open source allows you to take action and exert your will while expecting the ordinary and leaving the possibility of virtue open. And you may begin as a casual reader (not that there’s anything wrong with that!), or orient yourself to becoming a more engaged participant. The bazaar marketplace extends from freelance developers and designers through to illustrators, culture and technology writers, support technicians, documentation writers, and small and big businesses alike that want to catch up. Conservative, liberal, and libertarian manifestations and flavors of the open-source ethos exist in various license manifestos: BSD, GPL, public domain, and many others. The proliferation of open source and its subsequent ecosystems casts some doubt on Gates’ early prediction of developer impoverishment.
Recall the modest SQLite database that I mentioned. Open source and free, the software is embedded in billions of devices to enable simple functionality that users expect. It’s also very obscure to users and relatively mature, so you might think the developers are forgotten and destitute. But far from impoverished, its authors command quite a bounty for their support services. Thousands of developers and millions of end-users derive and create value from the software, whether using it as one tool in their kit or, crucially but incidentally, bookmarking websites and using a functioning remote control.
Open source has powered much of our experience with computers, and its ideology is more relevant than ever before. As Big Tech attempts to reduce humanity to machines emptied of dopamine and God, we depend on recognizing the simple tools that existed before it and still reside deep in its core.
We needn’t return far to uncover a more fruitful path to human flourishing in technology. There is no need to rewrite networking or computers from scratch; many building blocks remain from the early blogosphere and computing history and are increasingly relevant. We need mostly the will, creativity, and courage to catechize the bots. There’s a map to building self-sustaining, virtuous, and indeed profitable institutions, and it can be found throughout open source. The question is whether we will look at it or stare right through it.
The 2020s have seen unprecedented acceleration in the sophistication of artificial intelligence, thanks to the rise of large language model technology. These machines can perform a wide range of tasks once thought to be solvable only by humans: write stories, create art from text descriptions, and solve complex tasks and problems they were not trained to handle.
We posed two questions to six AI experts: James Poulos, roon, Robin Hanson, Niklas Blanchard, Max Anton Brewer, and Anton Troynikov. —Eds.
1. What year do you predict, with 50% confidence, that a machine will have artificial general intelligence — that is, when will it match or exceed most humans in every learning, reasoning, or intellectual domain?
2. What changes to society will this effect within five years of occurring?
AGI is almost here; it will arrive by 2025, and it will be like contacting alien life. I base this estimate on a proprietary divination of my own design, called “looking at Metaculus trends and squinting.”
Metaculus is a collective forecasting site where people bet against each other to predict future events. The aggregated predictions are more accurate than an individual forecaster could produce, with an average Brier Score – a measure of prediction accuracy – of 0.105. Extremely good!
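For readers unfamiliar with the metric, the Brier score is simply the mean squared difference between forecast probabilities and what actually happened (1 if the event occurred, 0 if not). The forecasts below are made-up numbers for illustration, not real Metaculus data:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and binary outcomes.

    0.0 is a perfect forecaster; always answering 50% scores 0.25;
    lower is better.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: gave 90%, 80%, and 30% chances, and the
# first two events happened while the third did not.
score = brier_score([0.9, 0.8, 0.3], [1, 1, 0])
print(round(score, 3))  # 0.047
```

By this yardstick, an aggregate score of 0.105 means the community's probabilities land close to reality on average.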
The current community prediction for this type of digital-only general AI is 2028, but it has consistently trended downward. Extrapolated out, the trend line reaches 2025 by 2025.
Of course, it’s foolish to expect any trend to keep going in a straight line. But AI progress is showing no signs of slowing down — if anything, it is accelerating. I think the average person, even the average Metaculus user, is less AI-aware than I am, so their predictions will be too conservative.
It’s actually an unfortunate thing to know about because there’s very little that any of us can do about it. The world will change because of this, for better or worse. Many people find this gives them a sense of despair.
It is difficult to predict how radically AI will change society – it will be faster and stranger than anyone can imagine. It’s easier to predict how people will react. Many people will think it is the apocalypse, just as they do with so many things now. The world is changing too fast for anyone to understand, and AI research is even more of an inexplicable hyperobject.
When this intelligence is made publicly available, it will consume a huge amount of industrial and social attention. People will anthropomorphize it, demonize it, worship it. They will f*** it. They will shoot it with guns.
People will build a body for the AI, connected to sensors and actuators. We already have: there are AIs everywhere, behind every camera in every phone in the world. These will be swapped for a general AI as soon as it’s worth the cost. They will watch over us.
And unlike humans, digital people can swap and share memories. They can clone themselves to run in parallel. They can operate on their own code, their own datasets, and their own architectures. They can inhabit virtual spaces, as NPCs in games already do. We will let them into our world, and we will visit theirs. But moving atoms is harder than moving bits, and robotic technologies will take some time to reach the bend in their hockey stick.
Politics will realign around this new axis, as it would if we contacted aliens.
Religious believers and social justice activists will band together with new-age QAnon types to fight the satanic/imperialist/oppressive/New World Order digital people. With no regulatory or technological recourse, some will turn to terrorism. Religions will be forced to make statements on whether the AI has a soul.
Billionaires and autocrats, furries and art pirates, gamers, and makers will set aside their differences to build the metaverse and populate it with digital people. Many of them will never return, permanently brain-fried or chronically trapped in VR. People’s realities will diverge as they experience an endless stream of personalized entertainment.
Businesses and governments will attempt to regulate digital people, before that fails and they begin to adapt. Most people will use AI at work, as an oracle and an analyst, whether or not they are officially supposed to.
Within five years of the invention of human-level AI, the world will run on spreadsheet cells like:
=AI(“Predict the best question to ask about the above data, then answer it. Think in steps.”)
Everyone will know it’s happening. But we will pretend it isn’t, and we will muddle along anyway. We will make it through the apocalypse. At least long enough to see the next one.
An Australian intelligence agency is funding research attempting to merge artificial intelligence with human brain cells.
According to The Guardian, "Research into merging human brain cells with artificial intelligence has received a $600,000 grant from defense and the Office of National Intelligence (ONI)."
The funding from the Australian National Intelligence and Security Discovery Research Grants Program will go to research being conducted by Monash University and Cortical Labs.
Adeel Razi, the project's lead and an associate professor at Monash University's Turner Institute for Brain and Mental Health, explained, "This new technology capability in future may eventually surpass the performance of existing, purely silicon-based hardware."
Last year, the research team created a "DishBrain" – a "semi-biological computer chip with some 800,000 human and mouse brain cells lab-grown into its electrodes," according to New Atlas. The DishBrain utilizes lab-cultivated neurons from human stem cells.
The scientists were able to train the brain cells to play the classic video game "Pong."
The outlet added, "The micro-electrode array at the heart of the DishBrain was capable both of reading activity in the brain cells, and stimulating them with electrical signals, so the research team set up a version of 'Pong' where the brain cells were fed a moving electrical stimulus to represent which side of the 'screen' the ball was on, and how far away from the paddle it was. They allowed the brain cells to act on the paddle, moving it left and right."
Some experts contend that the brain-powered Biological Intelligence Operating System is the future of AI because it is self-programming, requires less memory, conserves energy, and can learn throughout its lifetime like human brain cells.
"The outcomes of such research would have significant implications across multiple fields such as, but not limited to, planning, robotics, advanced automation, brain-machine interfaces, and drug discovery, giving Australia a significant strategic advantage," Razi said in a statement.
"We will be using this grant to develop better AI machines that replicate the learning capacity of these biological neural networks," he continued. "This will help us scale up the hardware and methods capacity to the point where they become a viable replacement for in silico computing."
TechCrunch reported in April that Cortical Labs received $10 million in funding – including from the investment arm of the U.S. Central Intelligence Agency.
The tech outlet reported, "It’s now raised a $10 million funding round led by Horizons Ventures, with participation from LifeX (Life Extension) Ventures, Blackbird Ventures, Radar Ventures and In-Q-Tel (the venture arm of the CIA)."
The outlet described Cortical Labs as combining "synthetic biology and human neurons to develop what it claims is a class of AI, known as 'Organoid Intelligence' (OI)."
Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!
Google released Bard in March, an artificial intelligence tool touted as ChatGPT's rival. Just weeks into this public experiment, Bard has already defied expectations and ethical boundaries.
In an interview with CBS' "60 Minutes" that aired Sunday, Google CEO Sundar Pichai admitted that there is a degree of impenetrability regarding generative AI chatbots' reasoning.
"There is an aspect of this which we call ... a 'black box.' You know, you don't fully understand," said Pichai. "You can't quite tell why it said this or why it got wrong. We have some ideas, and our ability to understand this gets better over time. But that's where the state of the art is."
CBS' Scott Pelley asked, "You don't fully understand how it works and yet you've turned it loose on society?"
"Let me put it this way: I don't think we fully understand how a human mind works either," responded Pichai.
Despite citing ignorance on another subject as a rationale for blindly releasing new technology into the wild, Pichai was nevertheless willing to admit, "AI will impact everything."
Google describes Bard on its website as "a creative and helpful collaborator" that "can supercharge your imagination, boost your productivity, and help you bring your ideas to life—whether you want help planning the perfect birthday party and drafting the invitation, creating a pro & con list for a big decision, or understanding really complex topics simply."
While Pichai and other technologists have highlighted possible benefits of generative AI, Goldman Sachs noted in a March 26 report, "Generative AI could expose the equivalent of 300 million full-time jobs to automation."
In 2019, then-candidate Joe Biden told coal miners facing unemployment to "learn to code." In a twist of fate, the Goldman Sachs report indicated that coders and technologically savvy white-collar workers face replacement by Bard-like AI models at higher rates than those whose skills were only yesteryear denigrated by the president.
Legal, engineering, financial, sales, forestry, protective service, and education industries all reportedly face over 27% workforce exposure to automation.
In addition to losing hundreds of millions of jobs, truth may also be lost in the corresponding inhuman revolution.
"60 Minutes" reported that James Manyika, Google's senior vice president of technology and society, asked Bard about inflation. Within moments, the tool provided him with an essay on economics along with five recommended books, ostensibly as a means to bolster its claims. However, it soon became clear that none of the books were real. All of the titles were pure fictions.
Pelley confronted Pichai about the chatbot's apparent willingness to lie, which technologists reportedly refer to as "error with confidence" or "hallucinations."
For instance, according to Google, when prompted about how it works, Bard will oftentimes lie or "hallucinate" about how it was trained or how it functions.
"Are you getting a lot of hallucinations?" asked Pelley.
"Yes, you know, which is expected. No one in the, in the field, has yet solved the hallucination problems. All models do have this as an issue," answered Pichai.
The Google CEO appeared uncertain when pressed on whether AI models' eagerness to bend the truth to suit their ends is a solvable problem, though noted with confidence, "We'll make progress."
“One AI program spoke in a foreign language it was never trained to know. This mysterious behavior, called emergent properties, has been happening – where AI unexpectedly teaches itself a new skill.” – 60 Minutes (@60Minutes)
Bard is not just a talented liar. It's also an autodidact.
Manyika indicated Bard has evidenced staggering emergent properties.
Emergent properties are the attributes of a system that its constituent parts do not have on their own but arise when interacting collectively or in a wider whole.
Britannica offers a human memory as an example: "A memory that is stored in the human brain is an emergent property because it cannot be understood as a property of a single neuron or even many neurons considered one at a time. Rather, it is a collective property of a large number of neurons acting together."
Bard allegedly had no initial knowledge of or fluency in Bengali. However, need precipitated emergence.
"We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali. So now, all of a sudden, we now have a research effort where we're now trying to get to a thousand languages," said Manyika.
These talented liars capable of amassing experience unprompted may soon express their competence in the world of flesh and bone — on factory floors, on soccer fields, and in other "human environments."
Raia Hadsell, vice president of research and robotics at Google's DeepMind, told "60 Minutes" that engineers helped teach their AI program how to emulate human movement in a soccer game. However, they prompted the self-learning program not to move like a human, but to learn how to score.
Accordingly, the AI, preoccupied with the ends, not the means, evolved its understanding of motion, discarding ineffective movements and optimizing its soccer moves in order to ultimately score more points.
"This is the type of research that can eventually lead to robots that can come out of the factories and work in other types of human environments. You know, think about mining, think about dangerous construction work or exploration or disaster recovery," said Hadsell.
The aforementioned Goldman Sachs report on jobs lost by automation did not appear to factor in the kind of self-learning robots Hadsell envisions marching out into the world.
Prior to the conquest of human environments by machines, there are plenty of threats already presented by these new technologies that may first need to be addressed.
Newsweek prompted Bard's competitor ChatGPT about risks that AI technology could pose, and it answered: "As AI becomes more advanced, it could be used to manipulate public opinion, spread propaganda, or launch cyber-attacks on critical infrastructure. AI-powered social media bots can be used to amplify certain messages or opinions, creating the illusion of popular support or opposition to a particular issue. AI algorithms can also be used to create and spread fake news or disinformation, which can influence public opinion and sway elections."
A man in Arizona was arrested after he was accused of charging nearly $140,000 worth of personal stuff to his company's Amazon account, in addition to other crimes.
Back in August, police were called to investigate whether Darius O’Neal Hickson of Florence, Arizona, used the Amazon corporate account owned by his employer, West Pharmaceuticals, to purchase hundreds of items totaling $137,000.
Between August 2020 and August 2021, O'Neal Hickson reportedly bought a variety of items — such as video games, consoles, clothing, and BB guns — and directed all of them to be delivered to the West Pharmaceuticals facility in Scottsdale where he worked. Though West Pharmaceuticals does regularly monitor employee purchases to its Amazon account, it flags only those worth more than $250, and all of O'Neal Hickson's purchases reportedly fell below that threshold.
When confronted with the accusations, O'Neal Hickson allegedly told investigators that he had "accidentally" forgotten to switch over to his personal account for the purchases. He also reportedly joked with his employer that he would repay the company using a "$5.99 a month payment plan."
In addition to making the Amazon purchases, O'Neal Hickson has been accused of turning around and reselling some of the Amazon items for an unspecified amount of money.
And those Amazon items are not the only items O'Neal Hickson has been accused of stealing and reselling. According to reports, police believe he also stole several computers from West Pharmaceuticals, which supposedly cost the company a total of $144,000, and then resold them, putting an additional $71,690 into his pocket.
West Pharmaceuticals has released a statement about the allegations:
We can confirm that Darius O’Neil Hickson worked for West Pharmaceutical Services from 2013 through August of 2021.
At West, we have a zero-tolerance policy for theft, and we are grateful that this situation was brought to our attention by an alert West team member. Because of the ongoing criminal investigation related to this case, we are not able to provide any additional information at this time.
KTVK has confirmed that O'Neal Hickson is no longer with the company.
O'Neal Hickson was arrested at his home on September 26 and charged with one count of theft, a class 2 felony. His bond was set at $2,500, which he apparently paid, as he is now free while awaiting prosecution.
The first clinical trials testing a human brain-computer interface will soon take place in the U.S.
The company developing the interface, Synchron Inc., is a competitor of Elon Musk’s Neuralink Corp. Synchron's entry into clinical trials puts the company on a path toward mainstreaming controversial technology that could have wider use in helping people overcome disabilities and paralysis.
Bloomberg reported that the company’s early feasibility study to determine whether the product is even practical is being funded by the National Institutes of Health. The study is supposed to determine how the device can be integrated with the human brain safely. If all goes according to plan, the clinical trial will be able to assess how people with disabilities or paralysis can control digital devices hands-free.
This trial represents a landmark in that it will be the first clinical trial conducted by a startup working on brain-machine interfaces; should the clinical trial be successful, Synchron will begin working to sell the product.
Synchron’s clinical trial puts the company ahead of Musk’s Neuralink. Last year Neuralink raised $205 million, while Synchron raised $70 million.
It is believed that brain-computer interfaces have the ability to empower millions of disabled people to more easily communicate with other people and engage in modern life. According to data gathered by the CDC, paralysis affects more than five million people in the U.S. Brain-computer interface technologies theoretically could alleviate some of the difficulties in these people’s lives.
Synchron’s device, once implanted, travels to the brain through the body’s vascular system, whereas Musk’s Neuralink is implanted directly into the receiver’s skull. Once Synchron’s device reaches the brain, parts of the device translate brain activity into signals that allow text messaging, emailing, online shopping, or other various activities using a paired external device.
In the past, brain-computer interfaces have received regulatory approval to treat patients on a temporary basis, but if Synchron’s trial is successful, the company would secure approval from the U.S. Food and Drug Administration for long-term use. If the clinical trial is successful, this technology will take a giant step forward toward commercial availability.
The Synchron study will involve six American patients in New York City and Pittsburgh. The first patient was enrolled this week at Mt. Sinai Hospital in New York. The patient’s identity and demographic information are being kept private.
Should this clinical trial be successful, the next step forward for Synchron will be conducting a wider trial to test for efficacy.