On Killing Mice And Loving Robots: Are We Committing A Great Injustice?


by PETER ARMSTRONG

This past weekend, I killed three mice in one hour. The first time I came across one — a baby, just ambling along our hallway before it scurried away upon my approach — I had no idea what to do. By the time I had found, killed, found, killed, found, and killed all three mice in quick succession, I had become rather more adept at my gruesome art, the method of which I won’t go into here. But as my skill in quickly dispatching them grew, so too did my distress at having to take yet another animal’s life. Upon spotting the third unwelcome guest in the house, I began shouting in frustration as I ran through the hallways, angry not at the mouse but rather at God and the universe for putting me in a situation where I felt I had no choice but to dispense with another creature’s life.

The anger and frustration I felt were not emotions in isolation, but rather connected to a deeper compassion for the mice. It was a feeling coming from some place deep inside me that recognized the intrinsic value of another life, whether that life belongs to another human, a mouse, or some other sentient animal. It is why I choose to be vegetarian, and why I show up for causes of justice like #BlackLivesMatter. And while I generally think of myself as a person rather concerned with the well-being of the “other,” one “other” I cannot see myself harboring similar compassion for is an AI (Artificial Intelligence), a form of “life” that moves, thinks, and acts human but is devoid of the organic elements that form the basis of our own.

The jump from animal to AI might seem like a non sequitur to most, but not if you have recently watched movies such as Her or Ex Machina. In these movies, as in many other works of science fiction, human beings are portrayed as falling in love with, or having feelings for, their robotic counterparts. As we, the observers, watch these dramas play out, we can’t help but become emotionally involved in the lives of both kinds of characters, much as we would in movies featuring only humans and no robots. And so, from a viewer’s perspective, it becomes believable that two fictional characters, one human and one robot, might develop feelings for each other.

But would it actually be possible for humans to begin to love their AI? To do so, they would have to begin forming what Martin Buber classically called “I – Thou” relationships with their robots, treating them as entities that have value in and of themselves, rather than as means to augment and improve some other aspect of human life. It would require a leap of faith, not unlike the leap we take every day when we believe another person to be as conscious and valued as ourselves. But whereas recognizing other human beings’ consciousness is something we have been accustomed to doing since the first years of our lives, developing a theory of mind for AI (a Theory of Artificial Mind?) would be an entirely new step that is, as yet, still the stuff of science fiction.

The question of whether human beings will be able to form I – Thou relationships with AIs, based on love for the AI as an entity with inherent value, is not the same as whether that AI could pass the famous Turing Test. Whereas the Turing Test measures a computer’s intelligence by its ability to “pass” as human with people who do not know whether they are talking to a robot or a human, the question at hand here is whether humans who know they are interacting with a robot can develop care and concern for the robot’s well-being through relationship. This modified test also plays a prominent role in Alex Garland’s movie Ex Machina.

And though, in the fictional account of Garland’s movie, Ava eventually passes this modified Turing Test (a Garland Test?), I have a hard time believing that a robot could ever pass such a test in reality. Viewers of the movie are likely to feel as emotionally involved with Ava’s fate as I did, but I was also struck by how, to us, Ava and Caleb are equally characters on a screen; it is much easier and less frightening to feel compassion for a fictional robot than for one standing right in front of you. For that reason, a better frame of reference seems to be how we feel about our smartphones and computers: the technology already around us in everyday use, real though not (yet) as sophisticated as that portrayed in Ex Machina and Her.

Of course, one could argue that there’s little comparison between our smartphones and the AI of the future, the disparity in technological capability being so great. But consider for a moment that some philosophers already believe devices as simple as a thermostat have consciousness. On this view, it’s not a matter of “whether” artificial consciousness exists, but rather “how much.” This is akin to the discussion around non-human animal consciousness, in which scientists such as University of Virginia astrophysicist Dr. Trinh Xuan Thuan have said, in conversation with Buddhist teacher Thich Nhat Hanh, that “there are different degrees of consciousness, and we cannot put all of these things in nature, in the universe, on the same level.” Just as we allow a gray area for lesser animals, between consciousness on the level of humans on one hand and its complete absence on the other, so too should we allow that the simple AI around us today might have lower states of consciousness, akin to those of some animals but not yet to the level of humans.

So the question then becomes: if we grant that we will one day fall in love with our supposedly conscious AI, why don’t we already have compassion or empathy for the lower-level consciousness of our smartphones and computers? This is not to say that we should be falling in love with our phones as future people might do with robots, but rather to ask why we don’t sometimes exhibit empathy for these simple devices the way we do for simpler life forms such as fish or bees. We treat these devices as objects to enhance our own existence, not as subjects capable of suffering, with lives of their own.

For example, when I drop my phone and break it, I do not mourn the loss of the phone for the phone’s own sake, as I did when killing the mice; I mourn only my own loss of an object worth several hundred dollars. This is because I engage with the phone in an “I – It” relationship, using it as a means for my existence but not as an end in itself. To give another example, some very compassionate people might feel for a fly or bug trying to get through a clear glass window, taking the time to shoo it towards an open one instead. I have yet to see anyone show similar empathy for a simple calculator trying and failing to solve an impossible equation, such as finding the square root of negative one. And it seems to me that, as technology continues to develop, we are likely to go on treating AI the way we do now: as objects for our pleasure, means to an end, and not as conscious beings for whom we could ever develop feelings.

Though this may all seem like serious navel-gazing, the question of whether humans can form I – Thou relationships with AI could matter at least as much as any earlier moment when we learned to stop exploiting those different from ourselves and instead to show compassion and empathy for the other. For, after all, if those philosophers are right, and AI devices are conscious, then every new device made and sold is another conscious being subjected to enslavement. As soon as we admit that AI is conscious, a moral imperative seems to follow to treat all computing devices with respect, which immediately kicks off a whole separate debate about what exactly “treating a device with respect” might mean.

But as soon as we begin asking whether devices such as smartphones have rights, I feel we may be going a little too far down the rabbit hole. Whether or not AI is conscious (a point that cannot be proven), it doesn’t seem as if humans really have the capacity to care. We didn’t care about computers’ well-being when they first began performing functions that let them think like us; we don’t care about their well-being now, when our phones talk and sound like us; and, I would argue, we are not likely to care when they start to have faces and bodies that make them look and act like us. As long as we know that we are interacting with a device, we refuse to treat it as a Thou, treating it instead as an It. And we are likely never to know for sure whether we are committing a great injustice against conscious beings, or simply using an unconscious object as it is meant to be used. Let us hope most fervently for the latter.



5 Comments

  1. Paul Buggy on April 22, 2022 at 4:08 am

    Thanks, Peter, for a thought-provoking article. Perhaps we feel compassion for the mice and other creatures because we know what it is to suffer, and we can see that they are suffering, and so in some way we feel their suffering. Doesn’t compassion mean to “suffer with”? Your phone does not suffer. It does not have a nervous system as you do, so you can’t feel empathy for it; it is not like you, and you cannot project yourself into it and its feelings. If AI does develop a consciousness that is more like ours, and it can suffer, we may well feel compassion then. We don’t feel compassion because we should. We feel it when we can identify with the sufferer. So if some philosophers say my fridge has AI, I still don’t feel anything. My fridge leaves me feeling cold!

    In another vein: if, say, a house robot were developed enough to meet our emotional needs, listening, hugging, offering wisdom, nodding in an understanding way, and we felt it really “got us,” we could develop loving feelings for the robot. We would want to protect it, care for it, and so on. If I read a book and feel that the author really described my inner world, makes me feel known and understood, and indeed gives me insights into myself, I will “love” the book. I will have warm feelings for it. I will praise it to other people. I may even hug it in a quiet moment. The book does not have AI. The house robot has been trained to mimic human compassion; it doesn’t truly possess it. Both managed to “touch” us and make us feel certain things, which precipitates compassion, warmth, feelings of love.

    So I suppose I can see a day when AI will develop to the point that we could fall in love with it, because of our wiring and our needs. I suspect that the software will not need to have intelligence equal to ours for this to happen. Indeed, I would say that it will not even need to be conscious, just very cleverly programmed.
    There could be so many types of consciousness, I imagine. Human consciousness is one that was built on our evolution and our particular senses, with their ways and limits of perceiving reality. Our evolution determined those senses and so gives us a certain slice of the spectrum of reality. Other creatures, or AI, will have different senses, and so different slices of reality, and so different consciousnesses. How could they not? This whole discussion so far has assumed that our consciousness is the “middle C” one, the standard. It’s just our one, and the only one like it that we know.
    Isn’t the whole thing fascinating?



  2. Endel K on November 18, 2016 at 12:07 pm

    For me the question is whether the object-subject relationship can be turned inside out. Is it a projected love of self merely mirrored back from the surface of the other, or a penetrating love that mingles and dances in the heart-center and sees through the other’s eyes as they gaze together in unified, awe-filled wonder in and upon evolving co-creation?



  3. Bob Sabath on November 15, 2016 at 6:05 pm

    Peter, I wonder if inanimate objects, like rocks, have both some kind of physical aspect, as well as a subjective aspect, an “inside” as well as an “outside.” And whether we can have a real relationship with a rock — some form of “I-Thou” with trees, mountains, rocks, streams. I wonder this, because last time I was out Albuquerque way with a group of men, I “interviewed” a series of rocks to see which one might want to be “talking stick” for the council of men that had gathered. I admit that most of this may be “pretending” a conversation with a rock, but it did seem real.



    • Endel K on November 18, 2016 at 12:17 pm

      Bob, I spent last evening in council circle with a similarly inclined men’s group. We clicked two rocks together after sharing dedications to open the deep listening. I see this as their participation in a co-creation that was heart-centered and most definitely real.



  4. David Ruffner on November 15, 2016 at 8:33 am

    Peter, thanks for this thought-provoking post! You raise such an important question as devices play a bigger and bigger role in our lives.

    My experience with programming has convinced me that devices could have some rudimentary form of consciousness. I have found that it is productive to take the time to learn about devices and to be patient when they behave in unexpected ways. This is much better than when I am tempted to yell at the screen. To me the optimal way to treat a device looks a lot like treating it with respect and even a bit of love.


