This page features a rough depiction of different views on the question of whether artificial general intelligence (AGI) will take off in a "hard" way (fast, no time for response or competition) or a "slow" way (more gradual, more time to integrate with society, possibility of competing projects). I plot these views against a very crude estimate of how long each forecaster has worked on commercial software (not counting academic computer science) as of 2014, when this graph was first created.
Caveats
- My sample is not at all random. It features mainly those people whose views on the hard/soft-takeoff question I know best.
- This comparison doesn't prove which side is right. It could be that people with more in-the-trenches experience are less inclined toward big-picture thinking. Roman Yampolskiy wrote regarding my graph: "Sometimes being too close to something prevents you from actually seeing the big picture. Every time I ask an Uber driver if they are worried about self-driving cars I either get a 'no' or they have no idea what I am talking about. Every time I ask a room full of accountants if they see blockchain and smart contracts as a threat I get blank stares."
- It's not clear that software experience beyond a few years provides much additional insight, so maybe the rightward spread in the graph isn't actually very meaningful.
- It's not clear that commercial rather than academic work is the best kind of experience. The main distinction I wanted to capture was between people who build concrete, real-world systems (whether in academia or industry) vs. those who analyze AI scenarios mathematically or philosophically. By focusing only on commercial software experience, I chose a variable that is more objective at the expense of being less relevant. Another weak reason to focus on commercial software is that academic software systems are not always built cleanly, robustly, scalably, and with extensive real-world use in mind, though the variance from project to project is high.
- Finally, and most importantly, this graph should not be construed as suggesting that thinking about AGI risk is unimportant. To the contrary, rogue AIs can take off slowly, and in general, I think shaping AGI trajectories is arguably the most important place where altruists can make a difference regardless of takeoff speed. My goal in sparking discussions about hard/soft takeoff is to encourage further thinking about the probable nature of AGI trajectories so that we can more effectively shape them. And it may be that if AGI takeoff is likely to be soft, then talking about this more openly would help avoid "boy who cried wolf" or "the sky is falling" problems down the road.
Further research
It would be interesting to slice these predictions along many other dimensions as well. For instance:
- Try a version that includes academic computer science along with industry work.
- Include any engineering experience. Mark_Friedenbach argues that "Elon Musk is a more reliable source about the timelines of engineering projects in general" than Ben Goertzel. I may agree given that Goertzel's timeline predicts AGI development way too soon, though I also think Goertzel has a deeper understanding of cognition than Musk.
- imuli created a plot of hard/soft expectations versus birth year. It omits a few of the more recent additions to my own chart.
It would furthermore be helpful to gather statistically valid data from surveys of AI experts. My graph here is just something I put together in a few hours based on what I already knew offhand. It's thus not particularly trustworthy.
Comparison with overall expert predictions
A more comprehensive collection of AI predictions is reported in "Future Progress in Artificial Intelligence: A Survey of Expert Opinion". One question asked for participants' beliefs regarding a hard takeoff:
Assume for the purpose of this question that [human-level machine intelligence] will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?
The median probability for "2 years" was 10%, and for "30 years" it was 75%. The mean probability for "2 years" was 19%, with a standard deviation of 24%. If the data were normally distributed (which they probably weren't), this would imply that about 15.9% of participants estimated a probability above 43% (= 19% + 24%, i.e., one standard deviation above the mean). This distribution largely aligns with what we see in my graph, or may even lean further toward soft takeoff than the experts in my graph do.
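As a sanity check on that tail estimate, here is a minimal sketch of the calculation under the normality simplification noted above. It uses only Python's standard library, and the mean and standard deviation come from the survey figures just quoted.

```python
import math

# Survey statistics for the "2 years" question (from the text above).
mean = 19.0  # mean estimated probability, in percent
sd = 24.0    # standard deviation, in percent

threshold = mean + sd  # one standard deviation above the mean = 43%

# Under a normal approximation, the fraction of respondents whose
# estimate exceeds the threshold is the upper-tail probability 1 - Phi(z).
z = (threshold - mean) / sd  # = 1.0
tail = 0.5 * math.erfc(z / math.sqrt(2))

print(f"Threshold: {threshold}%")               # 43.0%
print(f"Fraction above threshold: {tail:.3f}")  # ~0.159, i.e., about 15.9%
```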
Data used in the graph
Sources for views on takeoff speed
The views of several of the people in the above chart are described in "Hard vs. soft takeoff" on Wikipedia and in my "Thoughts on Robots, AI, and Intelligence Explosion". For further discussion of takeoff speeds in general, see also Michael Anissimov's "Hard Takeoff Sources".
Following are links to sources that describe the views of each person, possibly with quotes from those sources. Note that the most relevant question where takeoff speed is concerned is how quickly human-level AGIs would advance to a super-human level, rather than how quickly humanity reaches human-level AGI. (Nick Bostrom emphasizes the importance of this distinction in Superintelligence, since he believes that human-level AGI may require up to a century but that super-human AGI would probably follow shortly thereafter.) That said, in the absence of further information, I assumed in my chart that because Elon Musk predicts human-level AGI within a few years and is very worried about what will happen thereafter, he also expects a hard takeoff following human-level AGI.
- Ben Goertzel:
- "The Singularity Institute's Scary Idea (and Why I Don't Buy It)" (2010): "I think a hard takeoff is possible, though I don't know how to estimate the odds of one occurring with any high confidence. I think it's very unlikely to occur until we have an AGI system that has very obviously demonstrated general intelligence at the level of a highly intelligent human."
- "The Hard Takeoff Hypothesis" (2011): "the mindspace consisting of instances of the OpenCog AGI architecture [...] very likely possesses the needed properties to enable hard takeoff."
- "Semihard Takeoff" (2014): "Richard Loosemore and I have argued that an Intelligence Explosion is probable. But this doesn’t mean a Hard Takeoff is probable. [...] In spite of being a huge optimist about the power and future of AGI, I actually tend to agree with the anti-Foom arguments. A hard AGI takeoff in 5 minutes seems pretty unlikely to me. What I think is far more likely is an Intelligence Explosion manifested as a 'semi-hard takeoff'—where an AGI takes a few years to get from slightly subhuman level general intelligence to massively superhuman intelligence, and involved various human beings, systems and institutions in the process."
- Bill Gates: Not much disagreement with Musk or Bostrom. Agrees that AGI will become extremely powerful very quickly once we figure it out. (Gates discusses AI timelines a bit more here, but he doesn't comment much on takeoff speed.)
- Brian Tomasik
- Dean S Horak: "I'm in the camp that believes AGI will arrive incrementally and become more and more integrated into humans' lives via augmentation (external devices, wearables, implants and prosthetics) so that when it ultimately reaches the level of current human intelligence it will be part of us so that it will not be seen as separate from us."
- Eliezer Yudkowsky
- Elon Musk
- Eric Baum: "Baum predicts a slow takeoff"
- J. Storrs Hall
- Mark Waser: Told me by private email where he should go on the graph.
- Matt Mahoney: "Machines will eventually automate everything else we get paid to do, but only at a cost of $1 quadrillion, or decades of global effort. There is no shortcut, like recursive self improvement or a universal learning algorithm. We know this mathematically."
- Max Tegmark: "Tegmark suggests in his book a takeoff of 'hours or seconds'."
- Nick Bostrom
- Monica Anderson: "I believe human-or-lower level 'Understanding Machines' are easy and imminent whereas seriously superhuman logic-based godlike AI is impossible."
- Nikola Danaylov: "the more likely path, for me, would be a soft takeoff"
- Paul Christiano: "I think it is very unlikely that there will be rapid, discontinuous, and unanticipated developments in AI that catapult it to superhuman levels"
- Peter McCluskey: See the section "Evaluating the evidence" in the linked article
- Peter Rothman
- Ramez Naam: "The Singularity Is Further Than It Appears" and "Why AIs Won't Ascend in the Blink of an Eye - Some Math"
- Richard Loosemore: His and Goertzel's "Why an Intelligence Explosion is Probable" explains: "the conclusion of our relatively detailed analysis of Sandberg’s objections is that there is currently no good reason to believe that once a human-level AGI capable of understanding its own design is achieved, an intelligence explosion will fail to ensue." This itself doesn't speak to the speed of the takeoff to superintelligence, but Loosemore said in conversation that he expects the process to be rapid. In 2019, Loosemore wrote to me to suggest that his position on the takeoff-speed axis should be about the same as Goertzel's, so I adjusted his location.
- Robin Hanson: "I Still Don’t Get Foom" and "This Time Isn’t Different"
- Roman V. Yampolskiy: expects a soft takeoff up to human-level AGI but a hard takeoff thereafter
- Sam Altman: Endorses Bostrom's Superintelligence book. He believes fast takeoff is a serious possibility but seems uncertain on whether the speed will actually be fast or slow: "Development progress may look relatively slow and then all of a sudden go vertical—things could get out of control very quickly (it also may be more gradual and we may barely perceive it happening)."
- William Hertling: "In sum, Ramez brings up great points, and he may very well be correct: the singularity might be a hundred years off instead of twenty or thirty. [...] On the other hand, if we can agree that the singularity is a possibility, even just a modest possibility, then we may spur more discussion and investment into the safety and ethics of AGI." William confirmed by private email that his hard/soft position on the graph is accurate.
Sources for years worked in software
Following are some rough estimates of how much time the forecasters have spent working on commercial software, although these numbers are bound to ignore relevant distinctions and may omit important information that I couldn't find from a cursory investigation. Let me know if you have corrections or suggested additions. I didn't include Vernor Vinge because I wasn't sure whether or how long he worked on commercial software, but I suspect he might be an outlier relative to the trend in the graph, especially if his academic computer-science work is counted.
- Ben Goertzel: Over a decade leading AGI-related companies (if not always programming for them).
- Bill Gates: His full-time work at Microsoft extended from roughly 1975 to 2008 (=33 years). Of course, much of his attention was directed toward business rather than in-the-trenches product development. To compensate for this, I arbitrarily put Gates at ~20 years of experience.
- Brian Tomasik: Four years as a software engineer at Microsoft.
- Dean S Horak: 37 years.
- Eliezer Yudkowsky: Spent about 2 years building a commodities-trading program and half a year planning for a startup. Yudkowsky also helped design the Flare programming language, which demonstrates a reasonable degree of sophistication with software.
- Elon Musk: Spent a few years founding the Zip2 web company and later X.com. I'm not sure how much of this time was devoted to software development in particular and how much to business/management. (It's unclear whether software management rather than just development should count. Arguably it should, because the main signal I'm trying to capture with my "years of commercial software experience" number is understanding of complexity, timelines, and practicalities of implementing software systems.)
- Eric Baum: Had worked as a research scientist outside of a university for about 15 years as of the publication of his 2004 What is Thought? (which I think was the source of McCluskey's assessment of Baum as a soft-takeoff adherent). However, it looks like most of that work was pretty academic, so I reduced the number of years somewhat.
- J. Storrs Hall: I'm counting "1980-84, Systems Programmer, Laboratory for Computer Science Research, Rutgers University" and "1985-97, Computer Systems Architect, Laboratory for Computer Science Research, Rutgers University", since although these weren't strictly for industry, they do appear to have been non-academic computer work.
- Mark Waser: "Mark Waser has over 30 years of experience in software systems design & development and artificial intelligence." (His LinkedIn page has details.) Mark confirmed 31 years in a private email.
- Matt Mahoney: "I've been getting paid for writing software for 30 years."
- Max Tegmark: "While still in high-school, Max wrote, and sold commercially, together with school buddy Magnus Bodin, a word processor written in pure machine code for the Swedish eight-bit computer ABC 80, and the 3D Tetris-like game Frac."
- Monica Anderson: "I've worked on commercial software - mostly NLP and AI related - since about 1980 and have spent over a decade on pure AI research doing experimental programming."
- Nick Bostrom: Bostrom has studied AI and computational neuroscience in school, but it doesn't appear that he has worked on any industry-scale software projects.
- Nikola Danaylov: No commercial software experience as far as I can tell.
- Paul Christiano: Still a grad student.
- Peter McCluskey: Not sure about career history, but he has worked on many concrete software projects.
- Peter Rothman: "I've been in the software business for 30 years."
- Ramez Naam: "Ramez spent 13 years at Microsoft".
- Richard Loosemore: "I have [28] years experience writing software professionally, and a good deal of that working on AGI". In 2019, Loosemore wrote to me to say that, as of 2014, his first commercial software work had actually been 32 years earlier, not 28, and I updated the graph accordingly.
- Robin Hanson: I don't know whether Hanson's work at Lockheed Artificial Intelligence Center (1984-1989) was mainly academic or also applied. I'm counting at least this as commercial experience: "Xanadu Inc., consultant on hypertext publishing design, 1988-1991".
- Roman Yampolskiy: Has worked mainly in academia, with a sprinkle of non-academic programming.
- Sam Altman: As of mid-2015, Altman is 30. He dropped out of Stanford at age 23. This would naively imply 7 years of industry experience, but Altman has been coding his whole life, so I added a few more years to his point on the chart.
- William Hertling: William told me by private email: "I've been working in software since 1995. I spent eight years from 2003 to 2011 managing software projects, and the rest as a software developer, first doing PC-based C/C++ application development, and since 2011 doing Ruby/Javascript web service development."
Adding more data points
Feel free to write and suggest additional data points to add to the graph, possibly including yourself. This chart is not designed for statistical validity but instead for transparent presentation of many anecdotes. One nice feature of labeling people in a graph is that readers can judge for themselves which data points they trust most and which ones they don't find credible. If I were computing statistics on the data (which I'm not), I'd need to be more selective about who was included.
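If you want to reproduce or extend a chart like this one yourself, the following is a minimal sketch of how a labeled scatter plot can be generated with matplotlib. The names and coordinates below are hypothetical placeholders; this is not the script used for the original graph.

```python
# A minimal sketch of a labeled scatter plot; not the original chart's script.
import matplotlib.pyplot as plt

# (name, years of commercial software experience, expected takeoff hardness)
# The hardness score uses an arbitrary 0 (soft) to 10 (hard) scale.
points = [
    ("Forecaster A", 2, 9),   # placeholder data, not real entries
    ("Forecaster B", 15, 4),
    ("Forecaster C", 30, 1),
]

fig, ax = plt.subplots()
for name, years, hardness in points:
    ax.scatter(years, hardness)
    ax.annotate(name, (years, hardness),
                textcoords="offset points", xytext=(5, 5))

ax.set_xlabel("Years of commercial software experience (as of 2014)")
ax.set_ylabel("Expected hardness of AGI takeoff")
plt.show()
```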
Two conflicting camps
All generalizations are false, but there seem to be roughly two opposed camps on AGI among futurists. The following table summarizes properties that tend to cluster together, although items in a given column are certainly not equivalent:
| MIRI and FHI | Humanity+ and other transhumanist organizations |
| --- | --- |
| Relatively high probability on hard takeoff | Relatively high probability on soft takeoff |
| AGI has high probability of harming existing humans | AGI has lower probability of harming existing humans |
| Most impressed by the power of elegant math | Most impressed by the power of messy, complex systems |
| Generally younger (mostly in 20s and 30s) | Somewhat older (relatively more people in 40s and 50s) |
The groups share one thing in common: They both often believe that the other side is very misguided about the nature of AGI and therefore isn't producing useful contributions to the field. Sometimes one side thinks the other is causing active harm (MIRI fears that the other side is speeding up AGI development, and regular transhumanists fear that MIRI is demonizing and alienating AGI researchers).
I personally think both sides get some things right but that each side has its blind spots. There are very smart people in each camp, and they would be better served by breaking down barriers and updating somewhat in each other's directions.