If I Beat Up a Robot, Will I Feel Remorse?
At times, when my computer's performance has transformed it from an essential tool into a source of frustration, I find myself getting increasingly angry at it. Eventually I may begin cursing at it, roughly shoving the keyboard around, violently pressing the reset button, and so on. And I can't help noticing that, during these moments of anger, I have actually begun to blame my computer for the way it is working, as if there were a homunculus inside the machine who had decided it was a good time to frustrate me and then started fiddling with the wires and circuitry.
I'm sure I'm not alone. Human beings have a general tendency to attribute mental states, or intentionality, to inanimate objects. This attribution, known as mentalizing, is a general human strategy that we overuse and mistakenly apply to nonliving things, and there could be several reasons for it. One is that our knowledge of human behavior is more richly developed than other types of knowledge, owing to the early age at which we acquire it and the large role it plays in our lives. Thus, perhaps we are predisposed to draw on this knowledge when interpreting actions of any kind, which sometimes leads us to anthropomorphize non-human actions.
Another reason may be that assigning intentionality to an action in our environment is the safest and quickest way to interpret it. For example, if you are walking in tall grass and the grass a few feet ahead of you begins rustling, it is more adaptive to assume a predator is behind that movement than to assume it is just the wind. Someone who decides it is the wind may be wrong and end up killed or injured. Someone who assigns intention to the movement may also be wrong, but either way has a better chance of staying safe, because the erroneous conclusion leads to a defensive or evasive strategy rather than a nonchalant one.
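To make that cost asymmetry concrete, here is a minimal worked sketch; every number in it (the chance that the rustling is a predator, the cost of being caught off guard, the cost of a needless dodge) is an assumption chosen purely for illustration:

```python
# Toy expected-cost comparison for interpreting rustling grass.
# All numbers below are assumptions for illustration only.
P_PREDATOR = 0.05               # chance the rustling really is a predator
COST_ATTACK_UNPREPARED = 1000   # cost of being caught off guard
COST_NEEDLESS_EVASION = 1       # cost of dodging what was only the wind

# Strategy 1: assume it is just the wind and do nothing.
expected_cost_wind = P_PREDATOR * COST_ATTACK_UNPREPARED

# Strategy 2: assume an intentional agent and always take evasive action.
expected_cost_agent = COST_NEEDLESS_EVASION

print(f"Assume wind:  expected cost = {expected_cost_wind:.1f}")
print(f"Assume agent: expected cost = {expected_cost_agent:.1f}")
# Even at a 5% predator rate, over-attributing intention is the cheaper error.
```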
One more possible reason for our overuse of Theory of Mind (the understanding that others have their own mental states) may be based on our need for social interaction. Studies have indicated that people who feel socially isolated tend to anthropomorphize to a greater extent. Thus, perhaps part of the reason we assign intentionality so readily is that we have a desire for other intentional agents to be present in our environment, so we can interact with them.
A study published this month in PLoS ONE further explores our behavioral and neural responses when we interact with humans and machines that vary in their resemblance to humans. Twenty subjects participated in the study, playing a game called the prisoner's dilemma (PD), which has been used extensively to study social interaction, competition, and cooperation.
PD is so called because it is based on a hypothetical scenario in which two men are arrested for involvement in the same crime. The police approach each prisoner separately and offer him a deal that requires betraying the other. Each prisoner thus faces a decision: remain silent or betray his partner. If both stay silent, they receive a very minor sentence, because the police don't have enough evidence to make the greater charge stick. If both betray, they each face a 10-year sentence. If one betrays and the other remains silent, the betrayer goes free and the silent accomplice receives the full sentence, the longest of the possible outcomes. The game is usually modified into repeated rounds of cooperating or betraying, in which the players can base each decision on their opponent's action in the previous round.
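For readers who like to see the game spelled out, here is a minimal sketch of an iterated PD. The sentence lengths are illustrative stand-ins that follow the ordering described above (they are not figures from the study), and the partner is modeled as choosing at random simply to give the tit-for-tat player something to react to:

```python
import random

# Years in prison for (my_choice, partner_choice); lower is better.
# 'C' = stay silent (cooperate), 'D' = betray (defect).
# The numbers are illustrative; any values with the same ordering work.
SENTENCE = {
    ('C', 'C'): 1,    # both stay silent: minor sentence
    ('D', 'D'): 10,   # both betray: heavy sentence
    ('D', 'C'): 0,    # I betray, partner stays silent: I go free
    ('C', 'D'): 20,   # I stay silent, partner betrays: full sentence
}

def tit_for_tat(partner_history):
    """Cooperate on the first round, then copy the partner's previous move."""
    return 'C' if not partner_history else partner_history[-1]

def play(rounds=10, seed=0):
    rng = random.Random(seed)
    partner_history, total_years = [], 0
    for _ in range(rounds):
        mine = tit_for_tat(partner_history)
        partner = rng.choice(['C', 'D'])   # partner chooses at random here
        total_years += SENTENCE[(mine, partner)]
        partner_history.append(partner)
    return total_years

print("Total years over 10 rounds:", play())
```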
In the PLoS ONE study, participants played PD against a computer partner (CP) (just a commercial laptop set up across the room from them), a functional robot (FR) consisting of two button-pressing mechanisms with no human form, an anthropomorphic robot (AR) with a human-like shape, hands, and face, and a human partner (HP). Unbeknownst to the participants, the form of their opponent had no relationship to the responses it gave: all responses were generated randomly.
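In other words, the partner's appearance was pure window dressing. As a rough sketch of what that design implies (the condition labels come from the article, while the number of rounds and the way the randomness is drawn are my own assumptions), the same response generator can serve every partner:

```python
import random

# The four partner conditions described in the article.
PARTNERS = ['CP', 'FR', 'AR', 'HP']   # computer, functional robot,
                                      # anthropomorphic robot, human

def partner_responses(rounds=30, seed=None):
    """Cooperate/defect sequence drawn at random, ignoring partner identity."""
    rng = random.Random(seed)
    return [rng.choice(['cooperate', 'defect']) for _ in range(rounds)]

# Every partner, regardless of how human it looks, behaves the same on average.
for i, partner in enumerate(PARTNERS):
    responses = partner_responses(seed=i)
    rate = responses.count('cooperate') / len(responses)
    print(f"{partner}: cooperation rate = {rate:.2f}")
```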
The participants' impression of their partners was gauged after the experiment with a questionnaire. The survey measured how much fun the participants reported having when playing against each partner, as well as how intelligent and competitive they felt each partner to be. Participants indicated that they enjoyed the interactions more the more human-like their partner was. They also rated the partners as progressively more intelligent, from the least human (CP) up to the most human (HP). They judged the AR to be more competitive than its less-human counterparts, despite the fact that its responses were randomly generated, just as the others were.
The brain activity of the participants during their interactions was also measured using fMRI. Previous research has indicated that mentalizing involves at least two brain regions: the right posterior superior temporal sulcus (pSTS) at the temporo-parietal junction (TPJ), and the medial prefrontal cortex (mPFC). In the present study, these regions were activated during every interaction, but activity increased linearly as the partners became more human-like.
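To illustrate what "increased linearly" means in a design like this, here is a toy trend fit across the four partner conditions. The activity values are invented; only the idea of relating activity to the partner's rank on the human-likeness scale comes from the article:

```python
# Toy linear-trend fit across the four partner conditions (CP, FR, AR, HP).
# The activity values are invented for illustration; only the idea of a
# linear increase with human-likeness comes from the article.
human_likeness = [0, 1, 2, 3]        # rank: CP, FR, AR, HP
activity = [0.8, 1.1, 1.5, 1.7]      # hypothetical signal change in TPJ/mPFC

n = len(human_likeness)
mean_x = sum(human_likeness) / n
mean_y = sum(activity) / n
slope = (sum((x - mean_x) * (y - mean_y)
             for x, y in zip(human_likeness, activity))
         / sum((x - mean_x) ** 2 for x in human_likeness))
intercept = mean_y - slope * mean_x
print(f"fitted trend: activity = {intercept:.2f} + {slope:.2f} * human-likeness rank")
```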
These results indicate that the more a machine resembles a human, the more we may treat it as if it has its own mental state. That in itself isn't surprising, but what intrigued me more about the study was that there was activity in the mentalizing areas of the brain even during the interaction with the CP, as compared to controls. The activity also increased significantly with each new partner, even when the increase in human likeness was minimal. These findings are evidence of our proclivity to mentalize: even a slight indication of responsiveness by an object in our environment makes us more inclined to treat it as a conscious entity.
The authors of the study point out that these results may become even more significant as robots become a larger part of our lives. If the frustration I experience with my computer is any indication, I foresee human-on-robot violence becoming an epidemic by the year 2050.
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., & Kircher, T. (2008). Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE, 3(7), e2597. doi:10.1371/journal.pone.0002597