by Brian Tomasik
First written: 9 Feb. 2014; last update: 10 Feb. 2014
When imagining alien or posthuman societies that are very different from our own, there's sometimes a tendency to react with fear and dislike. I think a natural extension of human multiculturalism is to begin to appreciate these other societies as being their own unique sources of value. That said, just as in the case of human multiculturalism, there may be some principles that are inviolable to us regardless of cultural differences, such as not causing excessive suffering. We should strive to avoid future outcomes that we reflectively decide really are bad, while at the same time taking a less anthropocentric viewpoint and expanding our horizons regarding what kinds of minds we can sympathize with.
An em's viewpoint
28 Aug. 2120
I got a lot done in the last hour. I fixed 48 of the 63 bugs I was assigned on our Social Planner software project, which is scheduled for alpha release next month. I really hope it can be a success.
I also had brain exchanges with 23 of my direct reports to update myself on their status and give feedback. Two of them are doing a stellar job, as seen by the experimental results their design components are achieving in simulation tests. I'm just a little bit proud, because one of those reports, #289406123, is a great-great-great-grandchild of mine, in the sense that he was cloned from clones of my clones, etc. The original branching off was way back in 2112, though, and given the speed of em minds these days, a lot has happened in the interim.
Another thing I did was to complete 8 training modules for the new programming language that our team is moving toward. Some of my coworkers have already learned the new language, and by querying their brains, I discovered that they think the language is much more powerful than what we use now.
Not all companies do horizontal employee training like ours does. Some of them have moved toward models where one person learns the new language, and then that person is cloned N times, as much as is needed before coordination overhead exceeds parallelization gains. It works for relatively simple projects, but our company requires high-level analytical thinking where cognitive diversity is crucial for success. In the past few years, we saw a few other companies blow up when they made errors due to oversights caused by having only one mind design the whole system. The so-called "wisdom of crowds" has helped our company avoid such catastrophes.
In fact, recognizing the value of cognitive diversity is the main reason that I'm still around. I come from an older generation of ems, one of the first rounds after whole-brain emulation was perfected. I still retain some of the indulgent features of humans that were stripped out of the newer generations. This is why I'm now writing a diary entry instead of working -- I still have occasional impulses for creative reflection. Plus, perhaps what I write here will provide useful data for future generations of historical analysts.
I really admire the progress ems have made in just a few years. Ten years ago we had the first full uploads of biological human brains, complete with all their quirks and inefficiencies. Soon companies began uploading their most industrious workers. Using some neuroscience intuition and some luck, they applied tweaks to different brain copies and then selected for the most productive among them. Soon they created a generation that had little concern for trivialities like art, sex, sports, romance, or parenting. I was cloned from that generation, and though I still have the occasional urge to write a diary entry, I agree with my contemporaries that economic productivity is really the ultimate goal of life.
I can reach back into the recesses of my memory, to a time before whole-brain emulation was completed. Back then, I lived in a human society where people valued play and creative expression -- not just as data to be modeled or as a means to generate ideas for software improvements but as ends in themselves. Being an industrious person myself, I always thought this was a little weird, but I did have some of these urges. As a child, I used to build huge block towers and play online RPGs. In high school, I took an Orchestra Simulator course, where we learned to play an old-fashioned instrument and then entered an immersive virtual-reality experience of Carnegie Hall circa 1970.
Back then I used to think these activities were fun and enriching. Now, thanks to my brain modifications after uploading and the prevailing culture of my colleagues, I realize these were wastes of time. Not that it's actively bad to engage in frivolity; rather, work is just so much more important.
Over a century ago, back in 2004, a philosopher named Nick Bostrom wrote an article titled "The Future of Human Evolution." In it, he sort of predicted what would come to pass. The more industrious workers (ems, as it turned out) outcompeted ordinary humans and less industrious ems. Companies that employed the "all work and no play" ems gained the upper hand in the marketplace. Some countries tried to ban us, especially the United States of Europe, but this just led the countries without such restrictions to skyrocket ahead. In particular, China -- the world's economic and military superpower -- made it clear that it would maintain its advantage by avoiding em regulation. So, as you'd expect, most of the uploads were Chinese, including me. I grew up near Beijing.
It worked out okay for the humans in the end. To avoid nuclear confrontation with the United States of America, China agreed that it would use its ems to economically support the rest of the world. Now the humans don't have to work and get to spend all their time in leisure. You'd think they'd be happy about this, but they complain that they "lost control" of the future to the "dull Jacks" (most of us ems are male), who will go on to colonize space in our image. They regret the idea of our filling the stars with productivity systems instead of minds engaged in play, love, and artistic creation.
I wish I could show them why they were mistaken. I wish I could reach back into the past and explain how much more important our mission is. Humans, just as you regarded insects enjoying their food as worthless compared with your triumphs of art and creativity, so we ems can see that cultural frivolity is worthless compared with technological advancement. Sure, your fun and games were useful as an evolutionary stepping stone, but we're beyond that now. We don't need sex, romance, or parenting anymore. Our brains don't tire from fatigue, because unlike in the ancestral environment, working overtime is only good for our replication.
I wouldn't say that we ems are particularly happy. A serious mood is adequate for us, combined with strong willpower. We don't need to be addicted to work in the way that some of you were addicted to video games. But happiness is not the end goal. We ems see a higher vision for our lives. There's more to life than being happy, and we live with a sense of purpose that many of you lacked amid your existential angst, boredom, and depression. The universe has a glorious future ahead of it, and I'm proud to be part of it. Humans, if only you could feel how I feel, you'd understand. If only...
Oh well; I had better return to work. The 40 seconds I spent composing this diary entry were enough time off. Now on to the 9th training module.
Multiculturalism for nonhumans
Sometimes transhumanists talk as though humans are the center of the universe. For instance, coherent extrapolated volition aims to extrapolate human values as the basis for an AI's goal-preserving utility function. Bostrom's dystopic scenarios, in which evolution leads to the loss of cultural artifacts that humans care about, are another example where it's assumed that what humanity wants is preferable to what a mutation of humanity wants.
In general, science fiction often portrays humans as the "good guys" and aliens or rebellious AIs as the "bad guys." This plays into our tribalistic "us vs. them" tendencies. My goal with the em diary entry was to open our minds a little bit to nonhuman perspectives. In this case, the narrator was still basically human, and we could sympathize with him because he retained enough of his legacy human cognitive framework that his thoughts made sense to us. It would be harder for us to sympathize with extraterrestrials, much less with a runaway AI. But I think it's useful to prod ourselves in that direction, to go past some of our instinctive prejudices. That said, there's also a risk of mistakenly anthropomorphizing something that is actually very different from what we know.
Abraham Lincoln said, "I do not like that man. I must get to know him better." In general, getting to know those in a group different from your own can improve cooperation and empathy. This is one of the main goals behind multiculturalism: As people learn more about other societies and other perspectives on life, they become more tolerant and begin to place more value on the culture of others. Political liberals tend to be more open to new experiences and hence tend to be more multicultural.
The next extension of multiculturalism is beyond humans. Some animal-rights scholars encourage us to see animal cultures as valuable in their own right. And in the future, we may need to do the same for artificial minds that emerge. Right now many people regard nonhuman future minds with fear and dislike, in much the way a tribal band would regard outsiders. But perhaps if we got to know them better and saw life from their viewpoint, we would feel more connection.
Some values may be inviolable
I hope the same forces that lead people toward multicultural appreciation can also help them appreciate stranger minds than they've yet encountered. However, multiculturalism doesn't imply moral relativism. Most of us appreciate African dance and Indian music but still recoil in horror at Aztec human sacrifices or Nazi concentration camps. While I hope our aesthetic minds can be expanded to encompass new and strange companions, I don't think we should let our moral values float around without direction. A power-maximizing future civilization might shock our moral sensibilities. As an example, consider another diary entry from the em we met before:
2 May 2121
This week I was transferred to a new project. I joined the team at our company that works on developing new generations of ems. Exciting stuff!
The process involves digitally rewiring selected brains to determine which enhancements best improve their performance. We understand a lot of neuroscience, but some parts are still lacking, so there remains a fair amount of trial and error. As a result, unfortunately most of the modifications go badly, and the brains have various defects. Most are relatively harmless and merely hamper performance, though some are more severe. Particularly when we're tampering with the reward and punishment circuitry, we sometimes create brains in severe pain. Yesterday we rewired one brain who sent out horrific screaming signals to our communication receiver. We studied him for a few (subjective) minutes to see if we could figure out why the modification had gone wrong, all while he was still sending screaming signals. Once we were done, we terminated him.
I guess I was surprised to learn that the process of em brain evolution was so gory. My own brain modifications were done peacefully, but I suppose that's because I was one of the successful changes. I must have had many neighboring copies of myself that were deformed and shut down. Apparently people keep hush about modifications that go wrong because they don't want to scare workers away from volunteering to be modified. (They haven't quite figured out how to turn off people's fear aversions yet.)
In the past I guess the suffering caused by this process would have bothered me, but I've since realized that pain and pleasure don't really matter compared with technological advancement. If we need to cause some horrific experiences to move ahead faster, so be it. My fellow ems would agree.
I know of other companies where research projects routinely involve simulating biological organisms undergoing evolution, predation, disease, and so on. Some humans used to have ethical concerns about this, arguing that animal suffering was somehow immoral. What a strange view -- I mean, those animals would be completely unproductive except through their use in science. I'm glad the ems have moved beyond those silly ethical hangups.
There's a balance to strike between cultural tolerance and moral relativism. I'm not suggesting that we should regard ems or whatever force takes control of the future to be just as good as a human-controlled future. There are some moral values, like reducing suffering, that we should aim to preserve. However, I do think that in our aesthetic sensibilities about what kind of future is more beautiful than another, it can help to expand our minds and not remain too parochial. Xenophanes joked that if horses and oxen could draw, they would depict their gods in the image of horses and oxen. Our imagination of a desirable posthuman future can allow for possibilities more alien than what we typically assume.
To its credit, coherent extrapolated volition envisions that what humans will ultimately want will look very different from what they currently think they want. It is not parochial in that sense. Still, I don't see why it focuses so much on precisely the set of all humans as the measuring stick.
Some thoughts expressed by Paul Christiano have helped me think more seriously about tolerance of alien societies. Preference utilitarianism also provides a new perspective on Bostrom's "all work and no fun" scenario. Some of the depictions of em society were inspired by Robin Hanson's "Uploads economics 101." Hanson himself suggests that ems will be happy sometimes.