Adapted from a talk delivered at the New York Society for Ethical Culture, December 20, 2009.
A Brave New World
The chance of gain is by every man more or less over-valued, and the chance of loss is by most men undervalued, and by scarce any man, who is in tolerable health and spirits, valued more than it is worth.
From The Wealth of Nations, by Adam Smith
When I first gave the talk on which this series of posts is based, I promoted it as:
… a walk around the world of alternative intelligence with a few stops to consider the meaning of life in a world of rapid technological advancement.
By the time I had researched and written the talk, I amended that description to be a walk around the world of alternative being.
I am not an expert on computers, artificial intelligence, molecular electronics, the meaning of life, or any of the other technologies and philosophical questions I may touch on directly or indirectly. I don’t have any particular qualifications to be writing about this, other than being human: I am as curious as the next guy or gal, I read a lot, and I have always wondered what it all adds up to.
The seed of the 2009 talk was planted by a New York Times article by John Markoff published in July of that year. The article was about a conference held that February, at which the world’s leading computer and robotics scientists met to discuss the implications of, and ethical issues raised by, emerging technologies that can increasingly simulate human intelligence and emotions.
The conference took place at the Asilomar Conference Grounds on Monterey Bay in California, the same site used in 1975 by the world’s leading biologists to discuss the possible hazards and ethical implications of genetic engineering.
Among their concerns were the possible criminal uses of artificial intelligence; the potential for significant job loss as intelligent machines assume increasing amounts of the human workload; and the possibility of machines becoming capable of making life-and-death decisions on their own. On that last point, the article pointed to the Predator drones in use in Iraq and Afghanistan and to statements by the Air Force about plans to deploy a broad range of drones, from strategic bombers to nano-sized spy bots. As computer technology advances, the Air Force envisions swarms of drones mounting “preprogrammed attacks on their own.”
According to scientists at the conference, “we have reached the cockroach stage of machine intelligence.”
My AI antennas became fully engaged. I signed up for a blog called “Smart Planet,” which regularly posted juicy tech items, like a link to a video of a remote-controlled beetle. Scientists had managed to implant electrodes in a rather large beetle and were able to make it turn right or left by remote control.
Even before this, a web community of architects using the same CAD program I used at the time posted a link to this video, which I found astonishing:
Big Dog and the remote-controlled beetle are DARPA projects. DARPA is the Defense Department’s weird science arm. And speaking of arms, here is one last peek at a DARPA project that addresses a compelling need but also has some further implications by logical extension:
My antennas were not only up; things were really starting to get interesting! And it got better, or more worrisome, depending on how much of a technophobe you are. I came across two articles about robotic technology and computers that can make scientific discoveries and intuit the laws of physics.
In the first case, scientists at Aberystwyth and Cambridge Universities in England had built a robot named Adam that was able to:
• Hypothesize that certain genes in a yeast code for certain important enzymes;
• Devise experiments to test the hypothesis;
• Run the experiments;
• Interpret the results;
• And use those findings to revise the original hypothesis and test it out further.
Researchers confirmed “that Adam’s hypotheses were both novel and correct.”
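The hypothesize–test–revise loop attributed to Adam can be sketched as a toy Python program. Everything here is invented for illustration: the gene names, the simulated knockout “experiment,” and the single hidden true gene are assumptions, not the researchers’ actual setup.

```python
# A toy sketch of Adam's closed loop: hypothesize, experiment, interpret, revise.
# The "organism" is simulated; only one gene really codes for the enzyme.

TRUE_GENE = "G3"  # hidden ground truth the robot must discover (an assumption)

def run_experiment(gene):
    """Knock out `gene`; enzyme activity vanishes only for the true gene."""
    return 0.0 if gene == TRUE_GENE else 1.0

# Hypotheses: each candidate gene might code for the important enzyme.
candidates = ["G1", "G2", "G3", "G4"]

while len(candidates) > 1:
    hypothesis = candidates[0]             # pick a hypothesis to test
    activity = run_experiment(hypothesis)  # devise and run the experiment
    if activity == 0.0:                    # interpret the result
        candidates = [hypothesis]          # hypothesis confirmed
    else:
        candidates.remove(hypothesis)      # revise: rule it out, test further

print("gene coding for the enzyme:", candidates[0])
```

The real robot's hypotheses and experiments were of course far richer, but the control flow — a loop that shrinks the hypothesis space with each experiment — is the essence of what the articles describe.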
In the second case, researchers at Cornell University created a computer program that was able to derive the laws of motion from data about the movement of a pendulum in just over a day. The computer’s process relied on genetic algorithms practicing a kind of natural selection of ideas. With each pass through the data, equations are generated describing relationships in the dataset. Initially, all the equations are wrong, but some are less wrong than others. The computer retains the less wrong equations as a subset to work on, and in successive generations, arrives at equations that are fully correct.
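That generate–score–retain cycle can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions, not the Cornell team’s actual method: the small-angle pendulum law supplies synthetic data, candidate equations are restricted to the power-law form T = c · L^p, and all the numbers are made up for illustration.

```python
import math
import random

random.seed(0)

# Synthetic "observations": the period T of a pendulum of length L, from the
# small-angle law T = 2*pi*sqrt(L/g). The algorithm should rediscover it.
g = 9.81
data = [(L, 2 * math.pi * math.sqrt(L / g)) for L in (0.25, 0.5, 1.0, 2.0, 4.0)]

# A candidate "equation" is T = c * L**p, encoded as a (c, p) pair.
def error(cand):
    c, p = cand
    return sum((c * L ** p - T) ** 2 for L, T in data)

# Generation zero: random guesses. Initially, all the equations are wrong.
pop = [(random.uniform(0.1, 5.0), random.uniform(-2.0, 2.0)) for _ in range(50)]

for generation in range(300):
    pop.sort(key=error)          # but some are less wrong than others
    survivors = pop[:10]         # retain the less-wrong subset
    children = [(c + random.gauss(0, 0.05), p + random.gauss(0, 0.05))
                for c, p in random.choices(survivors, k=40)]
    pop = survivors + children   # next generation: survivors plus mutated copies

best_c, best_p = min(pop, key=error)
print(f"T ~ {best_c:.3f} * L^{best_p:.3f}")  # target: c = 2*pi/sqrt(g) ~ 2.006, p = 0.5
```

Because the best candidates are carried over unchanged each generation, the least-wrong equation can only improve, which is the “natural selection of ideas” the article describes.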
The article ended with a quote from cognitive scientist Michael Atherton indicating that there is still a long way to go before humans are no longer needed in the process. I think he was trying to be comforting.
These examples of robotics and alternative-intelligence endeavors are a very few of the almost innumerable ways in which we were pushing on the boundaries of what intelligence, indeed what being, is.
Not long after the New York Times article started me down the path of this talk, I stumbled across an article by Bill Joy, cofounder of Sun Microsystems, published in Wired magazine in April 2000. Joy is a lifelong believer in the power of computational technology and has made a good living from it. The article is entitled “Why the Future Doesn’t Need Us,” and its lead-in is as follows:
“Our most powerful 21st-century technologies – robotics, genetic engineering, and nanotech – are threatening to make humans an endangered species.”
In the article, Joy marks his first encounter with inventor and futurist Ray Kurzweil as the moment his healthy concern for the ethical implications of new technology turned into serious alarm.
It was a quotation from Kurzweil’s book, The Age of Spiritual Machines, that troubled him most deeply:
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite – just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.
From the Unabomber Manifesto, by Theodore Kaczynski
Joy did not in any way condone the actions of Kaczynski, whose bombs had hit close to home, gravely injuring his friend David Gelernter, but he could not dismiss the argument.
Joy goes on to cite Hans Moravec’s book Robot: Mere Machine to Transcendent Mind, which presents a future in which humanity is supplanted by the intelligent technologies it has created. Moravec is a robotics expert who founded the robotics research program at Carnegie Mellon University.
Moravec speculated that eventually, and sooner than we all think, robotic technology will guide its own design and production. He believed our main job in this century would be to ensure the cooperation of these intelligent machines.
This is the end of part 1 of a five-part series of posts on AI. Next week I will take up the terms “artificial” and “unnatural” and argue that they are not a useful way to think about technological progression. Subscribe here so you don’t miss it.