How Likely Is a Far-Future Utopia?

By Brian Tomasik

First published: 20 Dec 2017. Last nontrivial update: 20 Dec 2017.

Summary

Some starry-eyed transhumanists believe that if humanity can just survive the coming centuries without getting wiped out by artificial intelligence (AI) or other extinction risks, then our descendants will probably create a utopian civilization that embodies human values. This piece presents a few possible arguments to rain on that parade, although I haven't thought about this topic in great depth. My guess is that even conditional on human values of some sort retaining control over the far future, the probability of an outcome that deserves to be called a "utopia" is low (though of course this depends heavily on how broad one's definition of "utopia" is). When imagining far-future scenarios, we should focus on more realistic civilizational trajectories than utopias.

Contents

Conflicting notions of utopia
Historical precedent
Post-scarcity?
Despotism and value drift
What about stable compromise?

Conflicting notions of utopia

One tricky issue is whose vision of utopia we're talking about. Different humans have different values and would want different future civilizations. What counts as a utopia for some would be far from utopian for others.

Yudkowsky ("Coherent ...") discusses the possibility that different people's idealized values may fail to cohere. He suggests that if

"People coherently want to not be eaten by paperclip maximizers, but end up with a broad spectrum of individual and collective possibilities for which pizza toppings they prefer", we would normatively want a Friendly AI to prevent people from being eaten by paperclip maximizers but not mess around with which pizza toppings people end up eating in the Future.

However, many human values are not just "pizza toppings" but are in active conflict with one another. Here are just a few examples:

Yudkowsky ("Coherent ...") argues against letting perfect be the enemy of good:

Can there be no way to help a large group of people? This seems implausible. You could at least give the starving ones pizza with a kind of pizza topping they currently like. To the extent your philosophy claims "Oh noes even that is not helping because it's not perfectly coherent," you have picked the wrong construal of 'helping'.

That's fair enough, and it's ultimately a semantic dispute whether a compromise among conflicting human values—in which almost no parties get everything they want—would count as a "utopia".

Historical precedent

"Utopia" etymologically means "no-place", and that phrase is appropriate. With perhaps a few small-scale exceptions, human utopias have never existed on Earth so far. (And if we're also concerned about harm to animals, including invertebrates, then utopias have never existed on Earth since animal life arose. Even a peaceful human commune kills insects when gardening or walking in the woods.)

Part of the reason human utopias have been so rare is that current human brains are subject to the hedonic treadmill and will always suffer from something or other. But even if posthuman minds can be edited to suffer much less than present-day humans do, conflicts could still remain.

Before the advent of democracy, people might have imagined that democracy would lead to utopia. After all, if people can control their leaders, then how can the leaders become tyrannical? But in practice, reality finds many ways to screw things up. Money and connections exert undue influence over politics, and those in power work to keep the system that way. Meanwhile, many voters are uninformed, irrational, or apathetic. Democracy is plausibly a better form of government than, as Winston Churchill said, "all those other forms that have been tried from time to time" (Langworth 2009), but it still leads to a society like the USA where, for example, hundreds of thousands of people are homeless while billionaires persuade the government to give them massive tax cuts.

Take another example: the Internet. Free exchange of information should lead to improved wisdom and understanding in society, right? To some extent that's true. But the Internet has also led to 4chan and other expressions of humanity's seedy underbelly, along with ideological echo chambers, easy distribution of malware, and more. Without social media, I doubt that Donald Trump would have won the 2016 US presidential election (though such counterfactuals are difficult to talk about because an Internet-less world would be so different from our own).

The general trend that everything has flaws, and nothing in life lives up to idealized expectations, seems to me the strongest heuristic argument for doubting utopian futures. Extraordinary claims require extraordinary arguments, and the idea that the future will be utopian even though we've never seen a utopia before is an extraordinary claim.

Post-scarcity?

Many conflicts in our present world are caused by resource shortages. But it might be easy to satisfy people's basic needs in the far future. Would this allow utopia to finally work, even though it hasn't worked historically?

A post-scarcity future might contain fewer squabbles than our present world does, but many disagreements are about things other than resource allocation. When severe problems like hunger and physical insecurity are taken care of, humans have a way of discovering new problems. Middle-class people in developed countries don't face significant resource scarcity, but they still find reasons to quarrel. People bully one another, make insults, get jealous, take offense, and insist that what others are doing is wrong. People take advantage of one another, and some enjoy exerting control over others. As is sometimes said, humans can be "nasty little fuckers", and human nature contains some dark impulses. Why would this change once we get our hands on powerful technology?

In addition, I don't think scarcity will ever be eliminated. Many value systems have unbounded appetites for resources. For example, hedonistic utilitarians would prefer to convert as much matter as possible into happy computations. In a comment on Yudkowsky ("Coherent ..."), Paul Christiano says that one could argue "that the pie is going to grow so much that this kind of conflict is a non-issue. I think that's true to the extent that people just want to live happy, normal lives. But many people have preferences over what happens in the world, not only about their own lives."

Even egoists can have insatiable appetites because they can create arbitrarily large numbers of copies of themselves, their kin, or other things they happen to fancy.

In general, we should expect those agents who are most hungry for resources to seek the most resources, multiplying themselves until resources are once again scarce. The main way to prevent this would be central-government restrictions on such behavior, but such restrictions themselves wouldn't seem utopian to some people.

Despotism and value drift

Even if humans figure out how to control artificial general intelligence (AGI), this doesn't mean that human ideals will control AGI. Lots of selfish, authoritarian, or ideologically radical actors are also very interested in controlling the future, and there's some chance they'll become the dominant power(s).

For example, imagine that the US military decides that the USA needs to maintain its superpower status. It would then nationalize AGI and seek to bring the rest of the world under the dominion of the US government. While the US government has some humane impulses, it can also act in authoritarian ways, and in such a future, there might be little freedom and few resources granted to actors not supervised by the US military. There might be pressure for the rest of the world to conform to America's values and practices. This could suppress expression by the world's other cultures, making the future non-utopian from their perspectives. I don't think this particular failure mode is very likely, but trajectories like this, or even worse ones, certainly seem possible. Authoritarian rule has been the norm throughout many human civilizations and remains prevalent even today.

There's also a significant risk that society's values will drift over time, perhaps in dramatic ways. Some such shifts may be endorsed by present-day humans, since our posthuman descendants may be wiser and have access to better moral arguments than we do. But many forms of value drift, such as drift driven by evolutionary pressures or simple societal entropy, would move future civilization away from ideals that present-day humans would endorse on reflection. (Whether to count these "value drift" futures as human-controlled is a semantic dispute.)

What about stable compromise?

Maybe the best shot that a human-controlled future would have at something deserving the label of "utopia" would be a shared compromise arrangement among all of humanity. Such an agreement would not satisfy everyone. For example, libertarian values might inevitably conflict with some suffering-prevention or religious values. Ideological factionalization runs throughout human history. But perhaps civilization could reach a state that most competing camps would at least regard as not atrocious. (Some camps would still regard civilization as atrocious, such as people who give substantial moral concern to the suffering subroutines that would inevitably be run in massive numbers in a technologically advanced future.)

I think it's a pipe dream to imagine that a stable compromise arrangement would be reached by some sort of rational, deliberate process in which the wishes of all humans are given equal weight. Maybe the agreement would profess to do that on paper, but in practice, as in democracy, those with greater wealth and influence would have outsized control over the end result. Those who hold the most power at the time AGI arrives will continue to hold the most power in the post-AGI world, with random fluctuations as fortunes rise and fall.

It's unclear whether a compromise arrangement would be stable. Democratic decisions in our present world are always in flux because the composition and goals of the cooperating parties are in flux. Goal preservation seems hard to achieve (Tomasik "Will ..."), but maybe it would happen eventually, given that it's easier to ensure goal alignment and compliance from engineered software systems than from evolved, selfish biological creatures. Maybe, within thousands or millions of years, we'll see a stable singleton implementing some sort of compromise arrangement among whatever factions held power at the time the singleton was formed. I find it doubtful, though not impossible, that the content or distribution of ideological values in control when that happens will look anything like what we see on Earth today.