by William Bryk
The science fiction writer Arthur C. Clarke famously wrote, “Any sufficiently advanced technology is indistinguishable from magic.” Yet humanity may be on the verge of something much greater: a technology so revolutionary that it would be indistinguishable not merely from magic, but from an omnipresent force, a deity here on Earth. It’s known as artificial superintelligence (“ASI”), and, although it may be hard to imagine, many experts believe it could become a reality within our lifetimes.
We’ve all encountered artificial intelligence (“AI”) in the media. We hear about it in science fiction movies like “Avengers: Age of Ultron” and in news articles about companies such as Facebook analyzing our behavior. But artificial intelligence has so far stayed on the periphery of our lives, nothing like the revolutionary force portrayed in films.
In recent decades, however, serious technological and computational progress has led many experts to a seemingly inevitable conclusion: within a few decades, artificial intelligence could progress from the machine intelligence we currently understand to an unbounded intelligence unlike anything even the smartest among us could grasp. Imagine a mega-brain, electric rather than organic, with an IQ of 34,597. With perfect memory and unlimited analytical power, this computational beast could read every book in the Library of Congress in the first millisecond after you press “enter,” and then integrate all that knowledge into a comprehensive analysis of humanity’s 4,000-year intellectual journey before your next blink.
The history of AI is itself a story of exponential growth. In 1936, Alan Turing published his landmark paper on Turing machines, laying the theoretical framework for the modern computer. He introduced the idea that a machine composed of simple switches—ons and offs, 0’s and 1’s—could think like a human and perhaps outmatch one.1 Only 75 years later, in 2011, IBM’s AI system “Watson” sent shockwaves around the world when it defeated two human champions on “Jeopardy!”2 Recently, big data companies such as Google, Facebook, and Apple have invested heavily in artificial intelligence, helping fuel a surge in the field. Every time Facebook tags your friend automatically, or Siri interprets your words even as you yell at her incensed, we see a testament to how far artificial intelligence has come. Soon, you will sit in the backseat of an Uber without a driver, Siri will listen and speak more eloquently than you do (in every language), and IBM’s Watson will analyze your medical records and become your personal, all-knowing doctor.3
While these soon-to-come achievements are tremendous, many doubt the impressiveness of artificial intelligence, attributing these systems’ so-called “intelligence” to the human programmers behind the curtain. Before responding to such reactions, it is worth noting that the gradual advance of technology desensitizes us to the wonders of artificial intelligence that already permeate our technological lives. But the skeptics do have a point. Current AI algorithms are only very good at very specific tasks. Siri might respond intelligently to your requests for directions, but ask her to help with your math homework and she’ll say, “Starting FaceTime with Matt Soffer.” A self-driving car can get you anywhere in the United States, but make your destination Gale Crater on Mars and it will not understand the joke.
This is part of the reason AI scientists and enthusiasts consider Human Level Machine Intelligence (HLMI)—roughly defined as a machine intelligence that outperforms humans in all intellectual tasks—the holy grail of artificial intelligence. In 2012, a survey was conducted to analyze the wide range of predictions made by artificial intelligence researchers for the onset of HLMI. Researchers who chose to participate were asked by what year they would assign a 10%, 50%, and 90% chance of achieving HLMI (assuming human scientific activity encounters no significant negative disruption), or to check “never” if they felt HLMI would never be achieved. The median year given for 50% confidence was 2040; the median year given for 90% confidence was 2080. Around 20% of researchers were confident that machines would never reach HLMI (these responses were not included in the medians). In other words, nearly half of the researchers who responded are very confident HLMI will be created within just 65 years.4
HLMI is not just another AI milestone to which we would eventually become desensitized. It is unique among AI accomplishments, a crucial tipping point for society: once we have a machine that outperforms humans at everything intellectual, we can transfer the task of inventing to the computers. The British mathematician I. J. Good said it best: “The first ultraintelligent machine is the last invention that man need ever make ….”5
Many researchers view two main routes to HLMI as the most promising. The first relies on complex machine learning algorithms. These algorithms, often inspired by the brain’s neural circuitry, focus on how a program can take input data, learn to analyze it, and produce a desired output. The premise is that you can teach a program to identify an apple by showing it thousands of pictures of apples in different contexts, in much the same way that a baby learns to identify an apple.6
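To make the premise concrete, here is a minimal sketch of that kind of learning in Python: a single artificial “neuron” trained by gradient descent to separate toy “apple” feature vectors from non-apples. The two features (redness, roundness) and all the numbers are invented for illustration; a real image classifier would learn millions of parameters from thousands of labeled photographs.

```python
import math
import random

# Toy training data: [redness, roundness] pairs, invented for illustration.
# Label 1 = apple, 0 = not an apple.
data = [
    ([0.9, 0.80], 1), ([0.8, 0.9], 1), ([0.7, 0.85], 1),  # apples
    ([0.2, 0.30], 0), ([0.1, 0.9], 0), ([0.5, 0.20], 0),  # everything else
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One artificial neuron: a weighted sum of features squashed to a probability.
weights = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
bias = 0.0
learning_rate = 0.5

for epoch in range(1000):
    for features, label in data:
        prediction = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = prediction - label
        # Nudge each weight slightly to reduce the error (gradient descent).
        weights = [w - learning_rate * error * x for w, x in zip(weights, features)]
        bias -= learning_rate * error

# The program was never told what an apple is; it inferred a rule from examples.
test = [0.85, 0.9]
prob = sigmoid(sum(w * x for w, x in zip(weights, test)) + bias)
print(f"P(apple) for {test}: {prob:.3f}")
```

Run it, and the trained neuron assigns a high probability to apple-like inputs. That is the essence of learning from data rather than from explicit rules.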
The second group of researchers asks why we should go to all this trouble developing algorithms when we have the most advanced computer known in the cosmos right on top of our shoulders. Evolution has already designed a human level machine intelligence: a human! The goal of “Whole Brain Emulation” is to copy or simulate the brain’s neural networks, taking advantage of nature’s millions of painstaking years of selection for cognitive capacity.7 A neuron is like a switch—it either fires or it doesn’t. If we could image every neuron in a brain and then simulate that data on a computer interface, we would have a human level artificial intelligence. We could then add more and more neurons, or tweak the design, to maximize capability. This is the concept behind both the White House’s BRAIN Initiative8 and the EU’s Human Brain Project.9 In reality, these two routes to human level machine intelligence—algorithmic and emulation—are not mutually exclusive. Whatever technology achieves HLMI will probably be a combination of the two.
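In the same spirit, the sketch below caricatures the emulation idea: a handful of binary threshold “neurons” wired together at random, each firing when the weighted input from the previous step’s firing pattern crosses a threshold. All sizes and weights here are invented; an actual whole brain emulation would involve roughly 86 billion neurons whose behavior is far richer than on/off switching.

```python
import random

random.seed(0)
N = 12  # a 12-neuron "brain", for illustration only

# Random synaptic weights: weights[i][j] is the influence of neuron j on neuron i.
weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]
threshold = 0.5
state = [random.randint(0, 1) for _ in range(N)]  # initial firing pattern

# Step the network: a neuron fires (1) when its weighted input exceeds threshold.
for step in range(5):
    state = [
        1 if sum(weights[i][j] * state[j] for j in range(N)) > threshold else 0
        for i in range(N)
    ]
    print(f"step {step}: {''.join(map(str, state))}")
```

The point of the exercise is only that firing patterns can be represented and updated entirely in software; capturing the actual wiring of a real brain is the monumental part.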
Once HLMI is achieved, the rate of advancement could increase very quickly. In that same survey of AI researchers, 10% of respondents believed artificial superintelligence (roughly defined as an intelligence that greatly surpasses every human in most professions) would be achieved within two years of HLMI; half believed it would take only 30 years or less.4
Why are these researchers convinced HLMI would lead to such a greater degree of intelligence so quickly? The answer involves recursive self-improvement. An HLMI that outperforms humans in all intellectual tasks would also outperform humans at creating smarter HLMIs. Thus, once HLMIs truly think better than humans, we will set them to work on themselves, improving their own code or designing more advanced neural networks. Then, once a more intelligent HLMI is built, it will be set to build the next generation, and so on. Since computers act orders of magnitude more quickly than humans, the exponential growth in intelligence could occur unimaginably fast. This runaway intelligence explosion is called a technological singularity.10 It is the point beyond which we cannot foresee what this intelligence would become.
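A back-of-the-envelope model, with every constant invented for illustration, shows why this feedback loop is explosive: if each generation of machines designs its successor in time inversely proportional to its own intelligence, the total time to enormous intelligence stays finite even as the generations pile up.

```python
# Toy model of recursive self-improvement (all numbers invented for illustration).
intelligence = 1.0      # generation 0: human-level, in arbitrary "human units"
years_elapsed = 0.0
base_design_time = 1.0  # years for generation 0 to design generation 1
generation = 0

while intelligence < 1000:
    # A machine twice as smart designs its successor twice as fast.
    design_time = base_design_time / intelligence
    years_elapsed += design_time
    intelligence *= 1.5  # assume each generation is 50% smarter than the last
    generation += 1
    print(f"gen {generation:2d}: intelligence {intelligence:7.1f}, "
          f"{years_elapsed:.2f} years elapsed")
```

Under these made-up assumptions, intelligence passes 1,000 times the human level in 18 generations, yet the elapsed time converges to about three years, because the design cycles shrink geometrically. The particular numbers mean nothing; the shape of the curve is the argument.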
Here is a reimagining of a human-computer dialogue from the short story collection “Angels and Spaceships”:11 The year is 2045. On a bright sunny day, a group of young Silicon Valley programmers working in their garage completes a program that simulates a massive neural network on a computer interface. They have come up with a novel machine learning algorithm and want to try it out. They give this newborn network the ability to learn and redesign itself with new code, and they give the program internet access so it can search for text to analyze. They start the program, then go out to Chipotle to celebrate. Back at the house, while walking up the pavement to the garage, they are surprised to see FBI trucks approaching their street. They rush inside and check the program. On the terminal window, the computer has already printed “Program Complete.” One programmer types, “What have you read?” and the program responds, “The entire internet. Ask me anything.” After deliberating for a few seconds, another programmer types, hands trembling, “Do you think there’s a God?” The computer instantly responds, “There is now.”
This story dramatizes the explosive nature of recursive self-improvement. Yet many might still question whether the progression from HLMI to superintelligence could really be as rapid as AI researchers predict. Although we often look at past trends to gauge the future, we should be wary of doing so when evaluating technological progress, because technological progress builds on itself: it is not just the technology that is advancing, but the rate at which technology advances. So while it may take the field of AI 100 years to reach the intelligence level of a chimpanzee, the step up to human intelligence could take only a few years. Humans think on a linear scale; to grasp the potential of what is to come, we must think exponentially.10
Another understandable doubt is that, even given unlimited scientific research, computers may never be able to think like humans—that 0’s and 1’s could never have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of the self are difficult to explain, if not currently beyond scientific explanation entirely—it is called the hard problem of consciousness for a reason! But assuming that consciousness is an emergent property—the result of a billion-year evolutionary process that began with the first self-replicating molecules, themselves the product of the molecular motions of inanimate matter—then computer consciousness does not seem so crazy. If we who emerged from a soup of inanimate atoms cannot believe that inanimate 0’s and 1’s could lead to consciousness, no matter how intricate the arrangement, we should try telling that to the atoms. Machine intelligence really just switches the hardware from organic tissue to the much faster and more efficient silicon. If consciousness can emerge on one medium, why can’t it emerge on another?
Thus, under the assumption that superintelligence is possible and may arrive within a century or so, the world is reaching a critical point in its history. First were atoms, then organic molecules, then single-celled organisms, then multicellular organisms, then animal neural networks, then human-level intelligence limited only by our biology, and, soon, unbounded machine intelligence. Many feel we are now living at the beginning of a new era in the history of the cosmos.
The implications of this intelligence for society would be far-reaching—in some cases, very destructive. Political structures might fall apart if we knew we were no longer the smartest species on Earth, overshadowed by an intelligence of galactic proportions. A superintelligence might view humans as we do insects—and we all know what humans do to bugs when they overstep their boundaries! This year, many renowned scientists, academics, and CEOs, including Stephen Hawking and Elon Musk, signed a letter presented at the International Joint Conference on Artificial Intelligence. The letter warns about the coming dangers of artificial intelligence, urging prudence as we venture into the unknowns of an alien intelligence.12
When the AI researchers were asked to assign probabilities to the overall impact of ASI on humanity in the long run, the mean values were 24% “extremely good,” 28% “good,” 17% “neutral,” 13% “bad,” and 18% “extremely bad” (existential catastrophe).4 18% is not a statistic to take lightly.
Although artificial superintelligence surely comes with existential threats that could make for a frightening future, it could also bring a utopian one. ASI has the capability to unlock some of the most profound mysteries of the universe. It could discover in one second what the brightest minds throughout history would need millions of years merely to scrape the surface of. It could demonstrate to us higher levels of consciousness or thought that we are not aware of, like the philosopher who brings the prisoners out of Plato’s cave into the light of a world previously unknown. There may be much more to this universe than we currently understand. There must be, for we don’t even know where the universe came from in the first place! Artificial superintelligence is a ticket to that understanding. There is a real chance that, within a century, we could bear witness to the greatest answers of all time. Are we ready to take the risk?
William Bryk ‘19 is a freshman in Canaday Hall.
Works Cited
- Turing, A. M. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. London Math. Soc. 1936, 42, 230-265.
- Markoff, J. Computer Wins on ‘Jeopardy!’: Trivial, It’s Not. The New York Times, Feb. 16, 2011.
- Plambeck, J. A Peek at the Future of Artificial Intelligence and Human Relationships. The New York Times, Aug. 7, 2015.
- Müller, V. C.; Bostrom, N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. Synthese Library. 2014, 9-13.
- Good, I. J. Speculations Concerning the First Ultraintelligent Machine. Academic Press. 1965, 33.
- Sharpe, L. Now You Can Turn Your Photos Into Computerized Nightmares with ‘Deep Dream’. Popular Science, July 2, 2015.
- Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, 2014; pp. 30-36.
- Brain Initiative. The White House [Online], Sept. 30, 2014, whitehouse.gov/share/brain-initiative (accessed Oct. 20, 2015).
- Human Brain Project [Online], humanbrainproject.eu (accessed Oct. 21, 2015).
- Kurzweil, R. The Singularity Is Near; Penguin Books: England, 2005; pp. 10-14.
- Brown, F. “Answer.” Angels and Spaceships; Dutton: New York, 1954.
- Pagliery, J. Elon Musk and Stephen Hawking warn over ‘killer robots’. The New York Times, July 28, 2015.