Predictions of AGI Takeoff Speed vs. Years Worked in Commercial Software

By Brian Tomasik


This page features a rough depiction of different views on the question of whether artificial general intelligence (AGI) will take off in a "hard" way (fast, no time for response or competition) or a "slow" way (more gradual, more time to integrate with society, possibility of competing projects). I plot these views against a very crude estimate of how long each forecaster has worked on commercial software (not counting academic computer science) as of 2014, when this graph was first created.

Contents

Caveats
Further research
Comparison with overall expert predictions
Data used in the graph
  Sources for views on takeoff speed
  Sources for years worked in software
Adding more data points
Two conflicting camps

Further research

It would be interesting to slice these predictions along many other dimensions as well.

It would furthermore be helpful to gather statistically valid data from surveys of AI experts. My graph here is just something I put together in a few hours based on what I already knew offhand. It's thus not particularly trustworthy.

Comparison with overall expert predictions

A more comprehensive collection of AI predictions is reported in "Future Progress in Artificial Intelligence: A Survey of Expert Opinion". One question asked for participants' beliefs regarding a hard takeoff:

Assume for the purpose of this question that [human-level machine intelligence] will at some point exist. How likely do you then think it is that within (2 years / 30 years) thereafter there will be machine intelligence that greatly surpasses the performance of every human in most professions?

The median probability for "2 years" was 10%, and for "30 years" it was 75%. The mean probability for "2 years" was 19%, with a standard deviation of 24%. If the data were normally distributed (which they probably weren't), about 15.9% of participants would have estimated a probability above 43% (one standard deviation above the mean: 19% + 24% = 43%). This breakdown largely aligns with what we see in my graph, and these experts may even lean further toward soft takeoffs than the experts in my graph do.
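
The 15.9% figure is just the standard normal-tail result that about one sixth of a normal distribution lies more than one standard deviation above the mean. A quick check in Python, with the caveat (again) that treating the responses as normal is an idealization:

    # Fraction of a normal distribution lying more than one standard
    # deviation above the mean; an idealization, since the survey responses
    # were bounded between 0% and 100% and probably weren't normal.
    from scipy.stats import norm

    mean_p = 0.19  # mean response to the "2 years" question
    sd_p = 0.24    # standard deviation of those responses

    frac_above = norm.sf(mean_p + sd_p, loc=mean_p, scale=sd_p)
    print(f"{frac_above:.1%}")  # ~15.9%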

Data used in the graph

Sources for views on takeoff speed

The views of several of the people in the above chart are described in "Hard vs. soft takeoff" on Wikipedia and in my "Thoughts on Robots, AI, and Intelligence Explosion". For further discussion of takeoff speeds in general, see also Michael Anissimov's "Hard Takeoff Sources".

Following are links to sources that describe the views of each person, possibly with quotes from those sources. Note that the most relevant question where takeoff speed is concerned is how quickly human-level AGIs would advance to a super-human level, rather than how quickly humanity reaches human-level AGI. (Nick Bostrom emphasizes the importance of this distinction in Superintelligence, since he believes that human-level AGI may require up to a century but that super-human AGI would probably follow shortly thereafter.) That said, in the absence of further information, I assumed in my chart that because Elon Musk predicts human-level AGI within a few years and is very worried about what will happen thereafter, he also expects a hard takeoff following human-level AGI.

Sources for years worked in software

Following are some rough estimates of how much time the forecasters have spent working on commercial software. These numbers are bound to ignore relevant distinctions and may omit important information that my cursory investigation didn't turn up. Let me know if you have corrections or suggested additions. I didn't include Vernor Vinge because I wasn't sure whether or for how long he worked on commercial software, but I suspect he might be an outlier relative to the trend in the graph, especially if his academic computer-science work is counted.

Adding more data points

Feel free to write and suggest additional data points to add to the graph, possibly including yourself. This chart is not designed for statistical validity but rather for transparent presentation of many anecdotes. One nice feature of labeling people in a graph is that readers can judge for themselves which data points they trust most and which they don't find credible. If I were computing statistics on the data (which I'm not), I'd need to be more selective about who was included.
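
For concreteness, here's a minimal sketch (in Python with matplotlib) of how a labeled scatter plot like this one can be drawn. The names and numbers below are placeholders, not actual data from my graph:

    # Sketch of a labeled scatter plot in the spirit of the graph described
    # in this piece. The names and numbers are placeholders, not real data.
    import matplotlib.pyplot as plt

    # (label, years in commercial software, takeoff view: 0 = hard, 1 = soft)
    points = [
        ("Forecaster A", 1, 0.1),
        ("Forecaster B", 8, 0.5),
        ("Forecaster C", 25, 0.9),
    ]

    fig, ax = plt.subplots()
    for name, years, view in points:
        ax.scatter(years, view)
        ax.annotate(name, (years, view),
                    textcoords="offset points", xytext=(5, 5))

    ax.set_xlabel("Years worked in commercial software (as of 2014)")
    ax.set_ylabel("Predicted takeoff (0 = hard, 1 = soft)")
    plt.show()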

Two conflicting camps

All generalizations are false, but there seem to be roughly two opposed camps on AGI among futurists. The following table summarizes properties that tend to cluster together, although items in a given column are certainly not equivalent:

MIRI and FHI                                        | Humanity+ and other transhumanist organizations
Relatively high probability on hard takeoff         | Relatively high probability on soft takeoff
AGI has high probability of harming existing humans | AGI has lower probability of harming existing humans
Most impressed by the power of elegant math         | Most impressed by the power of messy, complex systems
Generally younger (mostly in 20s and 30s)           | Somewhat older (relatively more people in 40s and 50s)

The groups share one thing: each often believes that the other side is very misguided about the nature of AGI and therefore isn't producing useful contributions to the field. Sometimes one side thinks the other is causing active harm: MIRI fears work that speeds up AGI's arrival, while other transhumanists fear rhetoric that demonizes and alienates AGI researchers.

I personally think both sides get some things right but that each side has its blind spots. There are very smart people in each camp, and they would be better served by breaking down barriers and updating somewhat in each other's directions.