Should machines be capable of having moral status?

As science and technology progress, intelligence may no longer be a distinctly human trait. The release of the natural language model GPT-3 by OpenAI in 2020 was a revolutionary breakthrough in applied AI research. Loosely inspired by the neural networks inside the human brain, GPT-3 was the largest AI model at the time of its release, with more than 175 billion parameters. While this is still but a paltry fraction of the human brain's roughly hundred trillion synaptic connections, early experiments with GPT-3 produce computer-generated language that is making headway toward passing the Turing test. We now have computers that can generate art, prose, literature, and stories with minimal human input. If current trends in research continue, experts like Nick Bostrom foreshadow the coming of “superintelligence”: “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom 2014). We as humanity must wrestle with the question: should we grant machines moral status?

What does it mean to have moral status? Bostrom and Yudkowsky (2014) borrow from Francis Kamm (2007) in their definition of moral status: “X has moral status = because X counts morally in its own right, it is permissible/impermissible to do things to it for its own sake”. Bostrom and Yudkowsky draw on two principles to make the case for the moral status of machines: (1) the principle of substrate non-discrimination and (2) the principle of ontogeny non-discrimination. The former posits that it does not matter what beings are made of; the latter posits that it does not matter how beings come to be. So long as two beings have the same functionality and the same conscious experience, they should have the same moral status.

What is consciousness? Bostrom and Yudkowsky offer two qualities that relate to conscious experience: (1) sentience, the capacity for phenomenal experience, such as the ability to feel pain, and (2) sapience, capacities for intelligence, self-awareness, and reasoning. This line of inquiry is interesting because it lets us humans wrestle with what makes our humanity, our lived experience, uniquely human, and whether these aspects of consciousness are common to all intelligent beings. At the same time, we do not yet know whether consciousness has an empirical ceiling, that is, whether we can truly measure and objectively explain its physical or neural basis. This “hard problem of consciousness”, as put forth by Chalmers (1995), is one that we must overcome to reconcile consciousness with artificial intelligence.

I wonder why consciousness is a prerequisite for moral status. Could it be the case that consciousness instead gives us the necessary capacity to process morality? It is not the bare fact that a being is conscious that entitles it to moral status, but rather that because the being is conscious and able to process qualities like pain, self-reflection, empathy, and an understanding of other minds, it is capable of expressing and practicing qualities of morality such as respect, reciprocity, and love.

I am not opposed to granting machines rights and extending the concepts of the Universal Declaration of Human Rights beyond humans. Just as the Copernican revolution shifted our Earth-centric model of the world to a heliocentric understanding, so will the AI revolution change our anthropocentric model of man to a broader conception.

However, I question why we are using a deontological framework of duty, obligation, rights, and responsibilities based on an innate quality (consciousness) when research into the neurology of consciousness remains inconclusive. Can we instead consider a capability approach[1], adapted from Sen and Nussbaum, which claims that well-being should be understood in terms of people's (or, in our case, beings' in general) capabilities and functionings, and that the freedom to achieve well-being is of primary moral importance? While Sen and Nussbaum write from a development angle (how can we improve the overall development of poor societies by increasing their capabilities?), there are parallels we can draw to our discussion of machines and moral status.

In fact, a capabilities approach aligns nicely with the overall goals of the AI safety research community. As Bostrom (2002) noted, superintelligence is one of several “existential risks”: a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” The potential dystopian scenario of an artificial general intelligence (AGI) choosing its own self-preservation over the preservation of humanity, and the sheer unpredictability of such an AGI, are immense causes for concern. One wonders how we can shape the development of AI research so that it “empowers and not overpower humans”[2]. To date, we already have promising work, such as the Affective Computing research at the MIT Media Lab[3], which explores emotion recognition in machines and enables robots to respond with empathy, and the work by DeepMind on aligning AI with human values. We need to build intelligence that has the capacity for moral reasoning; this will push both the frontier of AI research and the frontier of what co-existence among multiple intelligences could look like within civilization.

I concede that my argument about moral status and machines does not offer a morally pure and satisfying resolution to the ethics of robot rights (this is a hard problem that will continue to be explored as philosophy, ethics, and neuroscience converge). Nevertheless, I think a more instrumental, self-serving, and self-preserving approach to AI research can go a longer way toward engineering the kind of machine future we want to live in.

[1] https://plato.stanford.edu/entries/capability-approach/

[2] Max Tegmark’s Carr Center Lecture — “How to Get Empowered, not Overpowered, By Artificial Intelligence” https://www.youtube.com/watch?v=TnsG3uxXoH8

[3] https://www.media.mit.edu/groups/affective-computing/projects/

Dora Heng

Recovering economist passionate about global development and being human in an age of technological disruption