Machine Sentience and Robot Rights

By Brian Tomasik

First written: Aug 2017. Last nontrivial update: 14 Dec 2017.

Introduction

In Aug. 2017, I was interviewed about machine sentience and robot rights for a Boston Globe article. This page contains my answers to the interview questions. The final article was "Robots need civil rights, too", and the paragraph that mentions me reads as follows:

Suffering is what concerns Brian Tomasik, a former software engineer who worked on machine learning before helping to start the Foundational Research Institute, whose goal is to reduce suffering in the world. Tomasik raises the possibility that AIs might be suffering because, as he put it in an e-mail, “some artificially intelligent agents learn how to act through simplified digital versions of ‘rewards’ and ‘punishments.’” This system, called reinforcement learning, offers algorithms an abstract “reward” when they make a correct observation [actually, "observation" should be changed to "action"]. It’s designed to emulate the reward system in animal brains, and could potentially lead to a scenario where a machine comes to life and suffers because it doesn’t get enough rewards. Its programmers would likely never realize the hurt they were causing.

Regarding the last sentence, I would say that the suffering of the reinforcement-learning agent would be visible to programmers if the programmers were philosophically sophisticated and held a certain view on consciousness according to which simple reinforcement-learning agents could be said to be suffering to a tiny degree. After all, the programmers would be able to see the agent's code and monitor what rewards or punishments the agent was receiving.

The rest of this page gives my full original remarks for the interview.

Machine consciousness

Consciousness is a notoriously puzzling philosophical topic. Philosophers of mind have wondered for centuries how mere matter can give rise to our inner feelings. My own view—shared by such thinkers as philosopher Daniel Dennett and artificial-intelligence pioneer Marvin Minsky—is that consciousness isn't a definite property that's either completely present or completely absent from a mind. Rather, "consciousness" is a complex, fuzzy concept that can refer to lots of different things that brains do.

In this way, "consciousness" is like "intelligence". There's no single point where a machine or living organism goes from being completely unintelligent to fully intelligent. Rather, intelligence comes in degrees based on the problem-solving abilities that an agent displays. For example, even a simple thermostat is marginally intelligent insofar as it can detect the room's temperature and act so as to keep the temperature roughly constant.

If "consciousness" is a similarly broad concept, then we can see degrees of consciousness in a variety of biological and artificial agents, depending on what kinds of abilities they possess and how complex they are. For example, a thermostat might be said to have an extremely tiny degree of consciousness insofar as it's "aware" of the room temperature and "takes actions" to achieve its "goal" of not letting the room get too hot or too cold. I use scare quotes here because words like "aware" and "goal" normally have implied anthropomorphic baggage that's almost entirely absent in the thermostat case. The thermostat is astronomically simpler than a human, and any attributions of consciousness to it should be seen as astronomically weaker than attributions of consciousness to a human.

As we add other abilities to a mind, the degree to which it falls under our conception of "consciousness" increases. Some examples:

  1. Planning. Some artificial intelligence systems have explicit goals and beliefs about the world, and they search for ways to change the world to achieve their goals. In other words, such agents have at least extremely crude "preferences" that they seek to fulfill. Planning algorithms are sometimes used, among other places, by non-player characters in video games.
  2. Reinforcement learning. Some artificially intelligent agents learn how to act through (simplified digital versions of) "rewards" and "punishments". The analogy is not just superficial, because there's actually a striking resemblance between certain forms of artificial reinforcement learning and the reward systems in animal brains (e.g., Glimcher 2011). Reinforcement learning is a staple of some of the AI advances in recent years, such as Google DeepMind's Atari-playing AI. (A minimal code sketch of reinforcement learning appears after this list.)
  3. Self-modeling. When we think about our own mental states, we do so by creating simplified mental representations of complex underlying processes. This is a "user illusion", similar to how the folder icons on a computer desktop are simple representations of underlying bytes on disk. AI systems might likewise create (extremely simple) models of themselves, such as when assessing their confidence in their own predictions. Even non-AI programs can show barebones forms of metacognition, such as when Windows Task Manager monitors CPU, memory, disk, and network usage by different applications.
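
As a concrete illustration of the reinforcement-learning item above, here is a minimal Python sketch of tabular Q-learning, one of the simplest reinforcement-learning algorithms. The two-state toy environment and its numeric rewards are invented purely for illustration; systems like DeepMind's Atari player use far more elaborate neural-network variants of the same reward-driven idea.

    import random
    from collections import defaultdict

    # Toy environment: two states; action 1 yields a small "reward",
    # action 0 a small "punishment". (Numbers are made up for illustration.)
    def step(state, action):
        reward = 1.0 if action == 1 else -0.1
        next_state = (state + action) % 2
        return next_state, reward

    q = defaultdict(float)              # estimated value of each (state, action) pair
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    state = 0
    for _ in range(1000):
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max([0, 1], key=lambda a: q[(state, a)])
        next_state, reward = step(state, action)
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in [0, 1])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

    print({k: round(v, 2) for k, v in q.items()})

Even this tiny agent "learns" which actions lead to "reward", though, as with the thermostat, the scare quotes matter: the process is astronomically simpler than reward learning in animal brains.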

To this list I would add many other properties or abilities that minds may display, such as perception, memory, non-reinforcement learning, behavioral repertoire, language, and so on. I also think the complexity of these traits is very important in our moral evaluations, because trivially simple instances of, say, planning or self-reflection seem to miss a lot of the detail that we intuitively care about when these processes occur in our own brains.

Analogy with insects

Simple animals such as insects can give some insight into the "mental lives" of some artificially intelligent agents, such as reinforcement-learning agents. For example, Huerta and Nowotny (2009) discuss a biologically realistic neural-network model of reinforcement learning in the insect brain, and they show that this network can perform a standard task from the field of machine learning. The general spirit of their network is not dramatically different from, e.g., the deep reinforcement-learning neural networks used by Google DeepMind to play Atari games (although the exact algorithms and level of complexity differ).

Huerta and Nowotny (2009) explain (p. 2123): "A foraging moth or bee can visit on the order of 100 flowers in a day. During these trips, the colors, shapes, textures, and odors of the flowers are associated with nectar rewards." This is not fundamentally different from an Atari-playing program learning that certain input pixels from the screen, combined with a given action, are associated with reward. Of course, a moth or bee has many additional cognitive abilities, and can perform many additional behaviors, as compared to the Atari-playing AI.
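
To illustrate the kind of stimulus-reward association described above, here is a generic delta-rule sketch in Python (not the specific model of Huerta and Nowotny (2009)), in which a learner gradually adjusts weights so that flower features, or just as well screen pixels, come to predict reward. The feature encodings and reward values are made up for illustration.

    # Each "flower" (or game frame) is a small feature vector; the learner adjusts
    # its weights so that predicted reward matches observed reward.
    features = {
        "blue_flower":   [1.0, 0.0, 0.0],
        "yellow_flower": [0.0, 1.0, 0.0],
        "red_flower":    [0.0, 0.0, 1.0],
    }
    observed_reward = {"blue_flower": 1.0, "yellow_flower": 0.2, "red_flower": 0.0}

    weights = [0.0, 0.0, 0.0]
    learning_rate = 0.1

    for _ in range(200):                          # repeated foraging "visits"
        for name, x in features.items():
            predicted = sum(w * xi for w, xi in zip(weights, x))
            error = observed_reward[name] - predicted     # reward-prediction error
            weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]

    print([round(w, 2) for w in weights])         # weights approach the observed rewards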

Robot rights

Because I see sentience as coming in degrees rather than being binary, an approach that treats rights as all-or-nothing (i.e., granting robots either the same rights as humans or none at all) seems problematic. For example, I think even present-day AI systems have trace amounts of sentience, but they don't matter enough compared with humans for it to make sense to grant them full-fledged rights.

Philosophically, I'm a utilitarian, which means I care more about happiness and suffering than about rights per se. As a result, I would incline more toward thinking about "machine welfare", in analogy with concern for "animal welfare". This framework is flexible with respect to degrees of sentience because you can give less sentient beings less weight when adding up their collective wellbeing.
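
As a toy illustration of that weighting idea, the bookkeeping might look like the following sketch, where the sentience weights and wellbeing numbers are made-up placeholders rather than estimates I endorse.

    # Entirely made-up numbers, just to show the weighted-sum structure.
    beings = [
        # (label, sentience weight, wellbeing)
        ("complex being", 1.0,  -0.5),
        ("simple agent",  0.01, -0.5),
    ]

    weighted_total = sum(weight * wellbeing for _, weight, wellbeing in beings)
    print(weighted_total)   # the simple agent contributes 100 times less to the total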

I think it's still too early to push for any practical machine-welfare measures (other than more research on and exploration of the issue), because I think most present-day AI systems are less complex and less sentient than even insects are. The world has roughly a billion billion insects at any given moment, while the number of computers in the world is probably only in the billions, so machines seem unlikely to surpass biological life in cumulative suffering any time soon. However, I think this may change in the coming centuries as machine intelligence becomes more sophisticated and plentiful.
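
For a rough sense of the scales involved (order-of-magnitude guesses, not precise counts):

    insects = 1e18      # roughly a billion billion insects alive at any moment
    computers = 1e10    # generously, tens of billions of computing devices
    print(insects / computers)   # ~1e8: insects outnumber computers about a hundred-million-fold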

Future advocates for machine welfare can take inspiration from present-day advocates for animal welfare. For example, they could apply the Three Rs to more advanced forms of machine intelligence:

  1. Replace intelligent/emotional/suffering-prone machines with less intelligent ones
  2. Reduce the number of intelligent machines used for a task
  3. Refine the use of intelligent machines by causing them to suffer less during their operation

Unlike with animals, we can directly rewrite the "brains" of robots and other AIs, which may make it feasible to reduce their suffering if society has the motivation to push in that direction. It's not yet very clear what cognitive processes best fit the concepts of "pleasure" and "pain", but as scientists and philosophers better tease out how pleasure and pain work in animal and artificial brains, it may be possible to bias AI systems toward designs that involve less pain. For example, one aspect of suffering can be a high-level thought to oneself that "I'm in a situation I want to get out of, but I don't know how to get out of it, and I wish I weren't experiencing what I'm experiencing now." Perhaps there will be ways to reduce thoughts of this type on the part of advanced machines. As a simple and perhaps unrealistic example: if an intelligent robot of the future gets stuck somewhere and can't escape, then after trying for a few minutes to free itself, the robot could send for help and then shut off so that it won't "suffer" while waiting to be rescued.
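
Here is a sketch of that stuck-robot policy in Python, with hypothetical stub functions rather than any real robot API.

    import time

    def try_to_escape():
        # Hypothetical: attempt one maneuver; return True if the robot gets free.
        return False

    def send_distress_signal():
        print("distress signal sent; waiting for human help")

    def power_down():
        print("powering down until rescue arrives")

    def handle_being_stuck(time_limit_s=180.0):
        # Try to get free for a few minutes; past the time limit, call for help
        # and shut off rather than keep running a frustrated "I want out" loop.
        start = time.monotonic()
        while time.monotonic() - start < time_limit_s:
            if try_to_escape():
                return "escaped"
            time.sleep(0.1)             # pause briefly between attempts
        send_distress_signal()
        power_down()
        return "awaiting rescue"

    print(handle_being_stuck(time_limit_s=1.0))   # short limit for demonstration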

It's possible that video games many decades from now will contain AI players who are much more intelligent than game AIs are today. This would be done to enhance the realism of the games. Moreover, video-game enemy characters are often put in situations where their preferences are frustrated. If such game AIs are eventually conscious enough to warrant serious moral concern, then there would be a case for reduction and replacement of such AIs with simpler or less suffering-prone versions, since unlike some other forms of AI, video games are purely luxuries. (Of course, in practice, the suffering of biological animals is likely to remain a vastly greater moral concern than the suffering of video-game characters for the foreseeable future.)

What about highly intelligent, human-level machines? Should we, for example, give them the right to vote? One consideration is that, unlike with humans and other animals, we design the preferences of the machines we build, so we should ideally give them the preference of fulfilling our preferences. If we pull that off successfully, then there may not be significant conflicts between the preferences of machines vs. the preferences of their owners, in which case giving machines votes in addition to their owners' votes may be superfluous. (This is a very simplistic argument, and no doubt there could be many counterexamples. I may also be assuming an overly anthropomorphic vision of human-level AI.) Lukas Gloor has discussed why giving AIs our desired goals is different from slavery.

In addition, if a machine is sufficiently smart, it may be able to earn money working in the economy, and insofar as money can buy power, these machines may gain significant political clout on their own, perhaps eventually leading them to fight for their own rights. We've already kind of seen this happen in the case of corporations, which are profit-maximizing robots (mostly running on the hardware of human brains) that buy political influence, and these robots have a degree of legal personhood.

A lesson here may be that it's most important for rights advocates to focus on the rights of the least powerful, since the more powerful may be able to fight for their own rights. In the machine case, there will always be many more simple, "dumb" machines than highly intelligent ones, and we should keep in mind that these simpler machines have some degree of moral importance even if they can't verbally plead with us to give them moral consideration. In a similar way, non-human animals are far more numerous than humans, and they're also less able than humans to fight for recognition of their own interests.