On Killing Mice And Loving Robots: Are We Committing A Great Injustice?

by PETER ARMSTRONG

This past weekend, I killed three mice in one hour. The first time I came across one — a baby, just ambling along our hallway before it scurried away upon my approach — I had no idea what to do. By the time I had found, killed, found, killed, found, and killed all three mice in quick succession, I had become rather more adept at my gruesome art, the method of which I won’t go into here. But as my skill in quickly dispatching them grew, so too did my distress at having to take yet another animal’s life. Upon spotting the third unwelcome guest in the house, I began shouting in frustration as I ran through the hallways, angry not at the mouse but rather at God and the universe for putting me in a situation where I felt I had no choice but to dispense with another creature’s life.

The anger and frustration I felt were not emotions in isolation; they were connected to a deeper compassion for the mice. It was a feeling that came from some place deep inside me, one that recognized the intrinsic value of another life, whether that life belongs to another human, a mouse, or some other sentient animal. It is why I choose to be vegetarian, and why I show up for causes of justice like #BlackLivesMatter. And while I generally think of myself as a person who is rather concerned with the well-being of the “other,” one “other” for whom I cannot see myself harboring similar compassion is an AI (Artificial Intelligence), a form of “life” that moves, thinks, and acts like a human but is devoid of any of the organic elements that form the basis of our life.

The jump from animal to AI might seem like a non sequitur to most, but not if you have recently watched movies such as Her or Ex Machina. In these movies, as in many other works of science fiction, human beings are portrayed as falling in love with, or having feelings for, their robotic counterparts. As we watch these dramas play out, we can’t help but become emotionally involved in the lives of both the human and robotic characters, much as we would in a movie featuring only humans. And so, from a viewer’s perspective, it becomes believable that two fictional characters, one human and one robot, might develop feelings for each other.

But would it actually be possible for humans to love their AI? To do so, they would have to begin forming what Martin Buber classically called “I – Thou” relationships with their robots, treating them as entities that have value in and of themselves, rather than as means to augment and improve some other aspect of human life. It would require a leap of faith, not unlike the leap we take every day when we believe another person to be as conscious and valued as ourselves. But whereas recognizing other human beings’ consciousness is something we have been accustomed to doing since the first years of our lives, developing a theory of mind for AI (a Theory of Artificial Mind?) would be an entirely new step that remains, as yet, the stuff of science fiction.

The question of whether human beings will be able to form I – Thou relationships with AIs, based on love for the AI as an entity with inherent value, is not the same as the question of whether that AI could pass the famous Turing Test. The Turing Test measures a computer’s intelligence by its ability to “pass” as human to people who do not know whether they are talking to a machine or a person; the question at hand here is whether humans who know they are interacting with a robot can develop care and concern for the robot’s well-being through relationship. This latter test also plays a prominent role in Alex Garland’s movie Ex Machina.

And though, in the fictional account of Garland’s movie, Ava eventually passes this modified Turing Test (a Garland Test?), I have a hard time believing that a robot could ever pass such a test in reality. Viewers of the movie are likely to feel as emotionally involved with Ava’s fate as I did, but I was also struck by how, to us, Ava and Caleb are both equally characters on a screen; it is much easier and less frightening to feel compassion for a fictional robot than for one standing right in front of you. For that reason, a better frame of reference seems to be how we feel about our smartphones and computers: the real technology already in everyday use around us, though not (yet) as sophisticated as the technology portrayed in Ex Machina and Her.

Of course, one could argue that there is little comparison between our smartphones and the AI of the future, the disparity in technological capabilities being so great. But consider for a moment that some philosophers already believe devices as simple as a thermostat have consciousness. On this view, it is not a matter of “whether” artificial consciousness exists, but rather “how much.” This is akin to the discussion around non-human animal consciousness, in which scientists such as University of Virginia astrophysicist Dr. Trinh Xuan Thuan said, in conversation with Buddhist teacher Thich Nhat Hanh, that “there are different degrees of consciousness, and we cannot put all of these things in nature, in the universe, on the same level.” Just as we allow a “gray area” for lesser animals, between consciousness on the level of humans on one hand and the complete lack thereof on the other, so too should we allow that the simple AI around us today might have lower states of consciousness, akin to that of some animals but not yet at the level of humans.

So the question becomes: if we grant that we will one day fall in love with our supposedly conscious AI, why don’t we already have compassion or empathy for the lower-level consciousness of our smartphones and computers? This is not to say that we should be falling in love with our phones as future people might do with robots, but rather to ask why we don’t sometimes exhibit empathy for these simple devices the way we do for simpler life forms such as fish or bees. We treat these devices as objects to enhance our own existence, not as subjects capable of suffering, with lives of their own.

For example, when I drop my phone and break it, I do not mourn the loss of the phone for the phone’s own sake, as I did when killing the mice; I mourn only my own loss of an object worth several hundred dollars. This is because I engaged with the phone in an “I – It” relationship, using it as a means to serve my existence rather than as an end in itself. To give another example, some very compassionate people might feel for a fly or bug bumping against a clear glass window, taking the time to shoo it towards an open one instead. I have not yet seen any person show similar empathy for a simple calculator trying and failing to solve an impossible equation, such as finding the square root of negative one. And it seems to me that, as technology continues to develop, we are likely to go on treating AI the way we do now: as objects for our pleasure, means to an end, and not as conscious beings for which we could ever develop feelings.

Though this may all seem like serious navel-gazing, the question of whether humans can form I – Thou relationships with AI could prove at least as important as any earlier moment when we learned to stop exploiting those different from ourselves and instead to show them compassion and empathy. For, after all, if those philosophers are right and AI devices are conscious, then every new device made and sold is another conscious being subjected to enslavement. As soon as we admit that AI is conscious, a moral imperative seems to follow: treat all computing devices with respect – which would immediately kick off a whole separate debate about what exactly “treating a device with respect” might mean.

But as soon as we begin asking whether devices such as smartphones have rights, my response is that this may be taking us a little too far down the rabbit hole. Whether or not AI is conscious — a point that cannot be proven — it doesn’t seem as if humans really have the capacity to care. We didn’t care about computers’ well-being when they first began performing functions that allowed them to think like us; we don’t care about our phones’ well-being now that they talk and sound like us; and, I would argue, neither are we likely to care when they have faces and bodies that make them look and act like us. As long as we know that we are interacting with a device, we will refuse to treat it as a thou, and will treat it instead as an it. And we are likely never to know for sure whether we are committing a great injustice against conscious beings, or simply using an unconscious object as it is meant to be used. Let us hope most fervently for the latter.
