The Future of Darwinism

By Brian Tomasik

First written: 3 Nov 2013. Last nontrivial update: 2 Sep 2017.

Summary

Darwinian competition is a convergent development in many physical, biological, and social systems, but there's also often a trend toward cooperation, which we can see, among other places, in multicellular organisms and human societies. Whether the future will be determined by Darwinism or by the deliberate decisions of a unified governing structure remains unclear. While Darwinism has ultimately been the cause of all the world's suffering, an end to Darwinism at the macro scale would unfortunately probably not end suffering.

Epigraph

"Competition has been shown to be useful up to a certain point and no further, but cooperation, which is the thing we must strive for today, begins where competition leaves off." --Franklin D. Roosevelt

Darwinian life

The laws of physics in our universe cause various physical operations to happen—chemical reactions, radiation, gravitational attraction, and so on. Sometimes a physical process has positive feedback and causes more processes like it to occur.
For example, when cosmic dust combines into larger planetesimals, the greater masses of those bodies attract further material, leading to protoplanets and finally planets. "Snowballing" processes like this occur in many forms in nature.

On the early Earth, some physical processes led to more physical processes of the same type. This reproductive dynamic led to what we call "life." Processes that tend to create more of themselves, if they can occur, will keep occurring and eventually occupy whatever space they have. This is why we see life in every niche that nature makes available, except during periods of instability where populations may temporarily fall below carrying capacity.

These physical processes, i.e. organisms, sometimes change form, and changed versions may be able to copy themselves more efficiently than the originals. In these cases, the changed versions become more numerous. If there's limited space, energy, or other inputs to replication, those organisms that can more effectively take what they need to make copies will end up being more prevalent. This is the process of Darwinism—competition among self-replicators that chooses some to survive and others to die.
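
To make the dynamic concrete, here's a minimal sketch in Python (all numbers are invented for illustration, not drawn from any real system) of two replicator types competing for a niche with a fixed carrying capacity. The slightly more efficient copier ends up owning almost the whole niche:

    # Toy model: two replicator types share a habitat with a fixed
    # carrying capacity. Type B copies itself slightly more efficiently.
    CAPACITY = 1000
    GROWTH = {"A": 1.05, "B": 1.10}  # offspring per individual per generation

    population = {"A": 10.0, "B": 10.0}
    for generation in range(100):
        # Each type reproduces according to its efficiency...
        for t in population:
            population[t] *= GROWTH[t]
        # ...but limited space culls the total back to carrying capacity.
        total = sum(population.values())
        if total > CAPACITY:
            for t in population:
                population[t] *= CAPACITY / total

    print(population)  # type B ends up with ~99% of the niche

Nothing in the model "wants" anything; differential copying efficiency plus a resource limit is enough to produce selection.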

In some cases, organisms mutate in ways that allow them not just to steal resources from others but to wholly consume others, which we call predation. Likewise, some organisms evolve to sap the resources of others through parasitism. Of course, not every organism can be a predator or parasite, because prey/hosts are limited. Most organisms remain victims of predation and parasitism because, unless the predators or parasites completely extinguish their prey/hosts, there's still opportunity to replicate as a member lower on the food chain, even if your life's prospects aren't especially cheerful.

Learning and consciousness

Over time, some organisms became increasingly complex as a way to better create copies. Learning began to emerge as a means for an organism, within its lifetime, to adapt its behavior to environmental information, rather than merely following a fixed genetic script and relying on evolution to do the adapting. Learning was thus a way to short-circuit evolution and produce more intelligent behaviors than would be possible by following simple gene-programmed rules. With learning came pain—updates to mental state variables to induce motivation to avoid similar conditions in the future.
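
As a loose illustration (this is a standard incremental value-update rule from reinforcement learning, offered only as an analogy, not as a claim about biological mechanism), pain can be viewed as a negative reward signal that drags down an organism's estimate of a situation's value, motivating future avoidance:

    # Standard incremental value update: move the estimate toward the
    # reward just received. Used here as an analogy for how pain might
    # adjust "mental state variables."
    def update_value(value, reward, learning_rate=0.1):
        return value + learning_rate * (reward - value)

    value_of_situation = 0.0  # initially neutral
    for _ in range(5):
        # Each painful encounter (reward = -1) lowers the estimate,
        # making the organism more inclined to avoid similar conditions.
        value_of_situation = update_value(value_of_situation, reward=-1.0)
    print(value_of_situation)  # ~ -0.41 after five encounters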

Many environmental inputs fed into organisms' reproductive fitness, so learning needed to account for multiple relevant factors: Seeking warmth, food, and sex, while avoiding injury, predators, and deleterious conditions. Organisms needed to make motivational tradeoffs—e.g., tolerate temporary cold in order to go out of a burrow and get food. For sophisticated tradeoffs, this process required the ability to combine information in potentially novel ways. Rather than relying purely on genetic hard-wiring or past experience, organisms began to make predictions about new outcomes they had not experienced, e.g., if I fall off this cliff, I'll get severely injured, and that would likely hurt, so I should avoid it.

These world simulations of "what would happen if ...?" were aided by increasingly abstract abilities to hold beliefs, update them based on evidence, and combine them together. These complex mental abilities made organisms more "conscious", and with consciousness came the ability to consciously suffer. When negative events happened, organisms' minds would flood with a recognition of how bad they were feeling and ideas for how to change the situation. They could even experience fear and pain merely by contemplating future outcomes, which allowed them to short-circuit actually undergoing detrimental experiences.

Conscious beliefs and ideas could be shared via communication and, in humans, via fully open-ended language. Among other things, this allowed for explicit trade, negotiation, coalitions, and knowledge transfer. Organisms who engaged in reciprocity with neighbors performed well in iterated prisoner's dilemmas and tended to outcompete their completely selfish rivals. Communication and cooperation allowed for the formation of tribes, city governments, and eventually nations, as well as other collectives and corporations.
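
A minimal iterated prisoner's dilemma sketch (using the conventional textbook payoffs, which aren't from this essay) shows why reciprocators can outcompete the completely selfish: a reciprocator earns the full benefits of mutual cooperation with its fellows while refusing to be exploited for more than one round:

    # Conventional payoffs: mutual cooperation -> 3 each; mutual
    # defection -> 1 each; lone defector -> 5; exploited cooperator -> 0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy the opponent's last move.
        return opponent_history[-1] if opponent_history else "C"

    def always_defect(opponent_history):
        return "D"

    def play(strategy1, strategy2, rounds=100):
        history1, history2, score1, score2 = [], [], 0, 0
        for _ in range(rounds):
            move1, move2 = strategy1(history2), strategy2(history1)
            gain1, gain2 = PAYOFF[(move1, move2)]
            score1, score2 = score1 + gain1, score2 + gain2
            history1.append(move1)
            history2.append(move2)
        return score1, score2

    print(play(tit_for_tat, tit_for_tat))    # (300, 300)
    print(play(always_defect, tit_for_tat))  # (104, 99)

Two reciprocators earn 300 each over 100 rounds, while a defector earns only 104 against a reciprocator (and 100 against another defector), so in a population with enough reciprocators, reciprocity pays.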

Non-Darwinian organization

The early days of life demonstrated Darwinism in full force: It was every bacterium for himself, with no central organization. But as time went on, multicellular organisms emerged, in which many cells worked together toward a common purpose under central command. The prevailing theory for how multicellular organisms evolved postulates that members of the same unicellular species banded together in a cooperative way to reproduce as a group, such as Dictyostelium does during food shortages. We see similar collectivism in ant/bee colonies. Usually ant colonies are held together by kin altruism, though in some cases, there are ant supercolonies where workers help any queen, not just the queen with whom they share DNA. The book The Major Transitions in Evolution offers a unified account of this growth in organizational complexity over the history of biology.

In some ways, nation states are to humans as humans are to their cells, though there are differences as well. For example, human societies involve Darwinian conflict among subcomponents of the larger unit—in the realms of market competition, political contests, status seeking, etc. Internal conflicts within states drive the policies that the state adopts, while cells within an organism dutifully execute commands encoded in their shared DNA. That said, there may indeed be Darwinian competition within some parts of an organism too, as has been suggested for neurons competing to establish connections.

A natural question is whether this process of coming together under a central authority for mutual benefit will continue to its logical conclusion: A world government. Already we have global institutions and arrangements, like the UN, the WTO, multilateral agreements, and so on. Nick Bostrom believes "it is more likely true than not" that Earth-based life will eventually form what he calls a "singleton," i.e., "a single decision-making agency at the highest level." If this doesn't happen through human-designed social arrangements, it might still be the result of the first general artificial intelligence (AI), which could take control of the whole world in a way that no single organism has ever been able to do unilaterally before. Of course, competition among AIs is another possibility.

Intelligent design

Non-Darwinian behavior can arise mindlessly, as in the case of multicellular organisms, but it can also arise intelligently. Sometimes intelligence is a way of pre-computing the Darwinian competition in the realm of ideas rather than the real world. This is true when you imagine a bunch of actions and select the one that you predict would have the most positive results. In other cases, intelligence literally allows for bypassing this approach of "try a bunch of things and select the best" altogether. For example, to optimize the function f(x) = 2 + x - x^2, you don't need to try a bunch of x values and pick the highest; you can just differentiate, set equal to zero, and solve the equation. In general, logic can provide analytic insight without any Darwinian component.
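
Here's the contrast in miniature (a Python sketch, purely illustrative). The "Darwinian" route generates many candidate x values and selects the best, while the analytic route notes that f'(x) = 1 - 2x = 0 implies x = 1/2 and skips the search entirely:

    def f(x):
        return 2 + x - x**2

    # "Darwinian" approach: try many candidates and keep the fittest.
    candidates = [i / 1000 for i in range(-10000, 10001)]  # x in [-10, 10]
    best = max(candidates, key=f)

    # Analytic approach: differentiate, set f'(x) = 1 - 2x = 0, solve.
    analytic = 0.5

    print(best, analytic)  # both 0.5; the maximum value is f(0.5) = 2.25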

Age of spandrels

Evolution is still continuing among humans today, but it's slow. Genetic change is not a primary driver of the major developments we see happening in our world. Much more powerful are technology and culture. Scientific ideas, art, music, literature, social norms, and other memes are mostly not tools that help some people survive better than others; rather, they're approximately spandrels from the standpoint of human evolution (with some exceptions, like the stances various religions take against birth control). On the other hand, there is a Darwinian process going on among the memes themselves in competing for human attention. Rather than humans adapting themselves to their environment, the memes are adapting themselves to human brains. The memetic fitness landscape includes humans' aesthetic tastes, hedonic rewards, desires for accurate beliefs, and other psychological traits.

We might think of present times as an "age of spandrels" in the sense that most of what's going on is a byproduct of bequeathed human tendencies playing themselves out. More than at most times in life's history, the future trajectory of Earth seems to depend on random quirks of how people feel and act rather than purely being a convergent outcome of selection pressure (unless we're talking about selection pressure among the memes). Robin Hanson uses the phrase "dream time" to describe this era in which human impulses can be so non-optimized by evolution and yet have such a significant impact on the future. It remains to be seen whether Darwinism returns as we hit resource limits (Bostrom 2004), or whether intelligent design of the future under a central organization allows for perpetuation of the quirks that current humans express. If Hanson's Malthusian scenario is right, I think our humane values of concern for the welfare of the powerless will likely fade away—being an anomaly within vast stretches of egoism that existed before and will exist after the reign of higher vertebrate animals.

Implications for suffering

If Darwinism has historically caused so much suffering on Earth, would a non-Darwinian future be better? Maybe, since costly competition would be reduced, allowing for more positive-sum exchanges. That said, the question isn't obvious—it depends on the contents of the design that's created and potentially spread into space. For example, future people might run simulations of Darwinian life, such as occurred in Earth's past, thereby perpetuating Darwinian misery on an astronomical scale. Or they might compute algorithms that, while not Darwinian, still experience negative reinforcement for learning purposes and hence still suffer. Because suffering seems to be an important part of rapid learning, we might expect future minds to suffer even if they're not under competitive pressure. That said, the imperative to learn optimally could be reduced absent competition. On the whole, I would be more sanguine about a future without Darwinism, though I realize it would still be far from ideal in many ways.

Will altruism be selected for or against?

Weinersmith (2016) is an SMBC comic that envisions interactions among self-driving cars having different moral views, such as deontology and utilitarianism. A "Nietzschean tractor-trailer" speeds through with the motto: "What is good?! All that heightens the feeling of power!" It crashes into the other cars and obliterates them because it's bigger. Weinersmith (2016) explains: "In time, all gentler ethical systems are extinguished among autonomous vehicles." This scenario is similar to those in Bostrom (2004) and Alexander (2014), where unconstrained evolutionary pressures lead to outcomes orthogonal or even antithetical to moral value.

Reciprocal altruism among powerful agents is likely to persist and intensify in the future, even in competitive scenarios, because cooperation helps agents be more successful. But it's unclear whether compassion for the powerless, such as non-human animals or "suffering subroutines" of the future, will remain.

Christiano (2013) provides an interesting argument against the idea that pure, non-reciprocal altruism will be exterminated by selection for power-maximizing agents. He argues:

In the world of today, it may seem that humans are essentially driven by self-interest, that this self-interest was a necessary product of evolution, that good deeds are principally pursued instrumentally in service of self-interest, and that altruism only exists at all because it is too hard for humans to maintain a believable sociopathic facade.

If we take this situation and project it towards a future in which evolution has had more time to run its course, creating automations and organizations less and less constrained by folk morality, we may anticipate an outcome in which natural selection has stripped away all empathy in favor of self-interest and effective manipulation. [...]

But evolution itself does not actually seem to favor self-interest at all. No matter what your values, if you care about the future you are incentivized to survive, to acquire resources for yourself and your descendants, to defend yourself from predation, etc. etc. If I care about filling the universe with happy people and you care about filling the universe with copies of yourself, I’m not going to set out by trying to make people happy while allowing you and your descendants to expand throughout the universe unchecked. Instead, I will pursue a similar strategy of resource acquisition (or coordinate with others to stop your expansion), to ensure that I maintain a reasonable share of the available resources which I can eventually spend to help shape a world I consider value[able].

As Christiano (2013) notes, Shulman (2012) makes a similar argument in the context of space colonization.

In other words, compassion for the weak may not be completely extinguished because at least some compassionate agents will fight just as hard as selfish agents to maintain power. Christiano (2013) points out that one reason we've historically seen power-maximizing agents outcompete more compassionate agents is that most evolved creatures execute a given strategy focused on the short term. Those who want to help others tend to help others in short-term ways, while those who want to acquire power focus on that. It doesn't take long until the agents who happen to prefer power maximization are in control. However, if agents are sufficiently rational and consequentialist, they can recognize that long-run implementation of their values requires maintaining power, which will lead all such agents to converge on seeking power rather than spending most resources on short-term "consumption".

I think this argument makes a plausible case that pure altruism in the future may not be driven to zero, since at least far-future-focused, strategically minded altruists may remain competitive with other power seekers. That said, a good deal of society's current altruism may be stripped away as the spandrel emotions driving it are weeded out. Moreover, I expect that altruism would still remain a minority value because society is also full of strategic, consequentialist agents with less altruistic goals, including corporations and nation states.

Christiano (2013) argues there may be selection for those who care about very long-run outcomes:

In finance: if investors have different time preferences, those who are more patient will make higher returns and eventually accumulate much wealth. In demographics: if some people care more about the future, they may have more kids as a way to influence it, and therefore be overrepresented in future generations. In government: if some people care about what government looks like in 100 years, they will use their political influence to shape what the government looks like in 100 years rather than trying to win victories today.

What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.
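
To illustrate the finance case with invented numbers: suppose two investors earn the same 5% annual return, but the impatient one also consumes 3% of her wealth each year. The patient investor's share of total wealth approaches 100%:

    # Invented numbers, for illustration only.
    patient, impatient = 1.0, 1.0
    for year in range(500):
        patient *= 1.05           # reinvests everything
        impatient *= 1.05 * 0.97  # earns 5%, then consumes 3% of wealth
    print(patient / (patient + impatient))  # -> ~1.0: patience compounds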

Perhaps this point suggests that far-future-focused altruists will outcompete governments and corporations, which tend to have somewhat shorter time horizons? I'm skeptical: while I can see how investors with longer time horizons will do better in the long run, it's less clear to me that far-future-focused consequentialists have significant strategies for outcompeting consequentialists who focus on the short term but are still very strategic. For example, corporations and wealthy people have strong influence over the halls of power, and I don't see how focusing on outcomes 1 million years from now would help much in displacing those forces.

Maybe one possible avenue is a focus on artificial general intelligence (AGI). Perhaps those who try to influence AGI outcomes will have disproportionate influence over an AGI-dominated future. But even here, I doubt the advantage of far-future focus is that big, because rich egoists of today will invest in AGI companies, will seek to influence AGI values, and so on. Thus, much of the power that AGI companies acquire in the future should continue to enrich and empower existing elites or their descendants to a significant degree. The US military, for example, takes an active interest in AGI and is quite strategic about long-run emerging technologies. (Perhaps the US military qualifies as altruistic according to the narrow definition of "not focused on a single person", but it's often not a compassionate altruistic force.)

Moreover, some egoists may care a lot about long-run outcomes if they expect that they or their children will be able to live forever. If your potential lifespan is measured in billions of years, then you'll be just as concerned as far-future-focused altruists about events millions of years from now. So power-seeking egoist consequentialists seem unlikely to be eliminated in favor of far-future-focused altruists.

In general, I'm wary of updating too strongly on theoretical arguments for any particular dramatic shift in the way the world works relative to what we see at present (Tomasik 2017). This is not because large shifts in society won't happen but because theoretical arguments can be flimsy, and it's often easy to devise convincing arguments for two different, contradictory conclusions. So my prior would be to expect that the future distribution of egoist vs. altruist forces in the long-run future might look something like the distribution of such forces among present-day elites. Of course, some powerful people are fairly altruistic, such as Bill Gates or Bernie Sanders. And many are mildly altruistic but mostly self-serving.

Even if altruism remains a faction in the power struggles on Earth in the year 2500, I expect that the specific values of future agents will look very different from those of present agents, both because of some degree of evolutionary pressure (although, as we've seen, such pressure is weakened once agents become consequentialist reasoners) and because social dynamics shift in unpredictable ways even in the absence of evolutionary pressure. For example, the evolutionary pressure on human values in the present is relatively small, except in cases like Catholicism (which prohibits birth control but allows children) vs. Shakerism (which forbade procreation), yet we still see significant value drift over time. Some people will welcome value drift of the future as "moral progress", while others will be horrified by the loss of concern for things they presently cherish.

Acknowledgements

Phil Torres pointed out an inaccuracy in a previous version of this piece.