Mirror Neurons May Be Responsible For Global Warming & U.S. Economic Woes

Since their discovery in the 1990s, mirror neurons have experienced a degree of fanfare uncommon for findings in the field of neuroscience. Mirror neurons are so named because they are activated both when a primate performs a task and when it watches another complete the same task, thus “mirroring” the behavior of the other animal. This unique activation pattern has led some to suggest that mirror neurons are integral not only to imitation, but also to understanding that others have their own mental states (theory of mind). By extension, it has been hypothesized that mirror neurons are necessary for language acquisition and social interaction. Dysfunction in mirror neurons has even been offered as a possible cause of autism.

Thus, they have come to be viewed as a very special kind of neuron, with a versatility and importance to brain function unrivaled by other types of brain cells. But do mirror neurons deserve the exalted status that some have ascribed to them? In short: probably not.

Mirror neurons do seem to play an interesting role in cognition. Primate studies have found mirror neurons to be activated in correlation with the goal or intention of a movement. They are also activated selectively, with specific groups corresponding to different goals of an action, e.g., grasping to move vs. grasping to eat. Of additional interest, they have been found to respond to sounds associated with an observed action.

fMRI studies in humans have revealed specific activity in areas where mirror neurons are thought to be located, such as the ventral premotor cortex (vPM) and anterior intraparietal sulcus (aIPS), during observation and imitation of movement.

But, while these findings in humans and non-human primates are intriguing, they don’t support the rampant speculation that has followed about the role of mirror neurons in overall cognitive function. Experiments with monkeys to date haven’t assessed the ability to imitate, experience empathy, display theory of mind, or use language. Of course, it is debatable to what extent some of these attributes even exist in non-human primates, or whether they could be studied if they do.

As for humans, neuroimaging experiments have allowed scientists to determine which regions of the brain are active during imitation or observation of an action. The specific neurons that are utilized, however, and any physiological characteristics that make them unique, cannot be assessed with current imaging technology.

Thus, the roles attributed to mirror neurons in the decade since their discovery may represent an overly ambitious attempt to describe their function. By extension, implying that their malfunction is critical in autism could really be jumping the gun.

In an essay in last week’s Nature, Antonio Damasio and Kaspar Meyer discuss the exaggerated claims about mirror neurons, and suggest a rational hypothesis for how they may work. Twenty years ago Damasio proposed a theory known as “time-locked multimodal activation” to explain the development of complex memories. The theory is based on the proposed existence of groups of neurons that, during the encoding of memories, receive input from a number of different sites. Damasio termed these neuronal groups convergence-divergence zones (CDZ). He suggested there are two types of CDZs: local CDZs, which collect information from areas close to a sensory cortex (e.g. the visual cortex), and non-local CDZs, which are higher-order structures of the brain where the information from local CDZs converges.

According to this theory, when a memory is formed—Damasio and Meyer use the example of a monkey opening a peanut shell—all the information about the event converges on a non-local CDZ. Then, if the monkey hears a peanut shell opened in the future, this would activate a local auditory CDZ, as well as the non-local CDZ where memories associated with the noise are stored. Signals are sent out from the non-local CDZ to all local CDZs that were involved in the original experience of the event, activating these sites and resulting in a sort of recreation of the original peanut-cracking.
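To make the convergence-divergence idea concrete, here is a toy sketch in Python. To be clear, this is my own illustration of the essay’s description, not code or terminology from Damasio and Meyer: each local CDZ stands in for a zone near one sensory or motor cortex, and the non-local CDZ records which local zones were active together during an event, so that a cue in one modality later re-evokes the rest.

```python
# A toy model of time-locked multimodal activation via
# convergence-divergence zones (CDZs). Purely illustrative.

class LocalCDZ:
    """Stands in for a zone near one sensory or motor cortex."""
    def __init__(self, modality):
        self.modality = modality

class NonLocalCDZ:
    """Higher-order zone where signals from local CDZs converge."""
    def __init__(self):
        self.linked = []

    def encode(self, local_cdzs):
        # Convergence: local CDZs active during the event are bound together.
        self.linked.extend(local_cdzs)

    def retrieve(self, cue):
        # Divergence: a cue in one modality reactivates every linked
        # local CDZ, recreating the original multimodal experience.
        if cue in self.linked:
            return [zone.modality for zone in self.linked]
        return []

# The monkey's peanut memory: sound, sight, and movement converge.
sound = LocalCDZ("sound of a shell cracking")
sight = LocalCDZ("sight of the peanut opening")
movement = LocalCDZ("hand movements of cracking")

peanut_memory = NonLocalCDZ()
peanut_memory.encode([sound, sight, movement])

# Hearing a shell crack later re-evokes the whole event.
print(peanut_memory.retrieve(sound))
```

The point of the sketch is that nothing in it is special about any single neuron; the “mirroring” falls out of how the network is wired.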

In this framework, mirror neurons correspond to the neurons of non-local CDZs. Seen this way, however, mirror neurons are not physiologically unique. They are normal neurons involved in a network that has less to do with “mirroring” than with integrating and syncing the various aspects of elaborate memories. This does not take away from the role and function of the network, but it should detract a little from the aggrandized status attributed to individual mirror neurons, in favor of an appreciation of the holistic complexity of the brain.

The CDZ hypothesis has not yet been tested, although research does indicate that the networks involved in observing and imitating behavior spread beyond purported mirror neuron sites. Regardless of whether the specifics of the CDZ hypothesis come to be supported by future studies, I feel it represents a more sensible approach to mirror neurons. To credit mirror neurons alone with a function as important as the ability to infer the mental states of others seems to run counter to much of what has been learned so far in neuroscience. We have never found language neurons, love neurons, or fear neurons. Instead, we have found networks spread across brain regions that correlate with the ability to experience these aspects of cognition. I suspect we will soon say the same about mirror neuron networks and their involvement in social interaction.


Damasio, A., & Meyer, K. (2008). Behind the looking-glass. Nature, 454(7201), 167-168. DOI: 10.1038/454167a

Dinstein, I., Thomas, C., Behrmann, M., & Heeger, D.J. (2008). A mirror up to nature. Current Biology, 18(1), 13-17.

If I Beat Up a Robot, Will I Feel Remorse?

At times, when my computer's performance has transformed it from an essential tool into a source of frustration, I will find myself getting increasingly angry at it. Eventually I may begin cursing it, roughly shoving the keyboard around, violently pressing the reset button, etc. And I can’t help noticing that, during these moments of anger, I have actually begun to blame my computer for the way it is working—as if there were a homunculus inside the machine who had decided that it was a good time to frustrate me and then started fiddling with the wires and circuitry.

I’m sure I’m not alone. Human beings have a general tendency to attribute mental states, or intentionality, to inanimate objects. This attribution, known as mentalizing, appears to be a general human strategy that we overuse and mistakenly apply to nonliving things, and there are several possible reasons for it. One is that our knowledge of human behavior is more richly developed than other types of knowledge, owing to the early age at which we acquire it and the large role it plays in our lives. Thus, perhaps we are predisposed to turn to this knowledge to interpret actions of any kind, sometimes causing us to anthropomorphize when examining non-human actions.

Another reason may be that assigning intentionality to an action in our environment is the safest and quickest way to interpret it. For example, if you are walking in tall grass and the grass a few feet ahead of you begins rustling, it is more adaptive to assume there is a predator behind that movement than to assume it is just the wind. Someone who decides it is the wind may end up being wrong, and getting killed or injured. Someone who assigns intention to the movement may also be wrong, but in either scenario has a better chance of being safe, because the erroneous conclusion probably prompted a defensive or evasive strategy instead of a nonchalant one.

One more possible reason for our overuse of Theory of Mind (the understanding that others have their own mental states) may be based on our need for social interaction. Studies have indicated that people who feel socially isolated tend to anthropomorphize to a greater extent. Thus, perhaps part of the reason we assign intentionality so readily is that we have a desire for other intentional agents to be present in our environment, so we can interact with them.

A study published this month in PLoS ONE further explores our behavioral and neural responses when we interact with humans and with machines that vary in their resemblance to humans. Twenty subjects participated in the study, engaging in a game called the prisoner’s dilemma (PD), which has been used extensively to study social interaction, competition, and cooperation.

PD is so called because it is based on a hypothetical scenario in which two men are arrested for involvement in the same crime. The police approach each individual separately and offer him a deal that requires betraying the other. Each prisoner thus faces a decision: remain silent or betray his partner. If both stay silent, each receives only a minor sentence, because the police don’t have enough evidence to make the greater charge stick. If both betray, each receives a moderate sentence. If one betrays and the other remains silent, the betrayer goes free while the silent accomplice receives the full sentence, the worst outcome in the game. The game is usually modified into repeated rounds of cooperate-or-betray, in which the players can base each decision on their opponent’s actions in previous rounds.
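The incentive structure is easier to see laid out in code. Here is a minimal sketch of an iterated PD in Python; the sentence lengths are my own illustrative choices (only their ordering matters for the dilemma to hold), and the random partner mirrors the study’s setup, in which every opponent’s responses were random.

```python
import random

# Illustrative sentences, in years, for (player A's choice, player B's choice).
# Lower is better; the exact numbers are arbitrary, but the ordering
# (free < minor < moderate < full) is what creates the dilemma.
SENTENCES = {
    ("silent", "silent"): (1, 1),    # both cooperate: minor sentence each
    ("silent", "betray"): (10, 0),   # betrayed: full sentence; betrayer free
    ("betray", "silent"): (0, 10),
    ("betray", "betray"): (5, 5),    # mutual betrayal: moderate sentence each
}

def random_partner(opponent_history):
    """Like the study's partners: responses are random, whatever the form."""
    return random.choice(["silent", "betray"])

def tit_for_tat(opponent_history):
    """A classic iterated-PD strategy: copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "silent"

def play(rounds=10):
    years = [0, 0]
    history_a, history_b = [], []
    for _ in range(rounds):
        a = tit_for_tat(history_b)       # A reacts to B's past moves
        b = random_partner(history_a)    # B responds at random
        sentence_a, sentence_b = SENTENCES[(a, b)]
        years[0] += sentence_a
        years[1] += sentence_b
        history_a.append(a)
        history_b.append(b)
    return years   # total years accumulated by each player

print(play())
```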

In the PLoS ONE study, participants played PD against a computer partner (CP) (just a commercial laptop set up across the room from them), a functional robot (FR) consisting of two button-pressing mechanisms with no human form, an anthropomorphic robot (AR) with a human-like shape, hands, and face, and a human partner (HP). Unbeknownst to the participants, the form of their opponent had no relationship to the responses given; all of the partners’ responses were random.

[Image: the four game partners, from the computer partner (CP) to the human partner (HP)]

The participants’ impressions of their partners were gauged after the experiment with a questionnaire. The survey measured how much fun the participants reported having when playing against each partner, as well as how intelligent and competitive they felt each partner to be. Participants indicated that they enjoyed the interactions more the more human-like their partner was. They also rated the partners as progressively more intelligent from the least human (CP) up to the most human (HP). And they judged the AR to be more competitive than its less-human counterparts, despite the fact that its responses were randomly generated, just as the others’ were.

The brain activity of the participants during their interactions was also measured using fMRI. Previous research has indicated that mentalizing involves at least two brain regions: the right posterior superior temporal sulcus (pSTS) at the temporo-parietal junction (TPJ), and the medial prefrontal cortex (mPFC). In the present study, these regions were activated during every interaction, but activity increased linearly as the partners became more human-like.

These results indicate that the more a machine resembles a human, the more we may treat it as if it has its own mental state. That in itself isn’t surprising; what intrigued me more about the study was that the mentalizing areas of the brain were active, relative to controls, even during the interaction with the CP. The activity also increased significantly with each new partner, even when the increase in human likeness was minimal (see the picture of the partners above). This is evidence of our proclivity to mentalize: even a slight indication of responsiveness from an object in our environment makes us more inclined to treat it as a conscious entity.

The authors of the study point out that these results may become even more significant as robots become a larger part of our lives. If the frustration I experience with my computer is any indication, I foresee human-on-robot violence being an epidemic by the year 2050.


Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., Kircher, T., & Robertson, E. (2008). Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI. PLoS ONE, 3(7), e2597. DOI: 10.1371/journal.pone.0002597

Daisy, Daisy, Give Me Your Answer Do

Even the most successful attempts at artificial intelligence (AI) always seem to lack certain essential qualities of a living brain. It is a formidable task to create a robotic or computerized simulation of a human that seems to display original desires or beliefs, or one that truly understands the desires and beliefs of others in the way people can. This latter ability, often referred to as “theory of mind,” is considered an integral aspect of being human, and the extent to which it has developed in us may be one thing that sets us apart from other animals. Reproducing theory of mind in AI is difficult, but a semblance of it has been demonstrated before with physical robots. Until now, however, it had never been recreated in computer-generated characters.

A group of researchers at Rensselaer Polytechnic Institute (RPI) has developed a character in the popular computer game Second Life who uses reasoning to determine what another character in the game is thinking. The character was created with a logic-based architecture RPI calls RASCALS (Rensselaer Advanced Synthetic Character Architecture for “Living” Systems). The architecture involves several levels of cognition: simple systems for low- and mid-level cognition (like perception and movement), and advanced logical systems for abstract thought. The group believes it can eventually use RASCALS to create characters in Second Life that possess all the qualities of a real person, such as the capacity to lie, believe, remember, or be manipulative.

Second Life is a life-simulating game, similar in some ways to the popular game The Sims. Unlike The Sims, however, Second Life involves a virtual universe (a metaverse) where people can interact with one another in real time through avatars they create for use in the game.

The character created by the group at RPI, Edd, appears to have reasoning abilities equivalent to those of a four-year-old child. To test these abilities, Edd was placed in a situation with two other characters (we’ll call them John and Mike). Mike places a gun in briefcase A in full sight of John and Edd. He then asks John to leave the room. Once John is gone, Mike moves the gun to case B, then calls John back. Mike asks Edd which case John will look in for the gun.

Does this sound familiar? It’s an actual psychological test developed in the 1980s, known as the Sally-Anne test. The Sally-Anne test plays out the same scenario described above, only with dolls and a marble or ball (since its inception, the test has been done with human actors as well). A child watches the Anne doll take a marble from Sally’s basket and put it in her box while Sally is out of the room. If, when Sally returns, the child can guess that she will look in her basket for the marble, it demonstrates that he or she has begun to form theory of mind. The child is able to understand that other people have thoughts and beliefs different from his or her own, realizing that when Sally re-enters the room she is unaware the marble has changed positions, and so will look in the spot where the marble originally was. The ability to make these kinds of belief attributions usually develops in children at around age three to four.

Edd, the character from Second Life, is able to do the same. When Mike asks him which case John will look in for the gun, he will say case A, the case in which John saw the gun placed. And Edd is not programmed specifically to make this choice. Instead, he “learns” from past mistakes that if John cannot see the gun being moved, John will not know it is in the other briefcase.
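RPI hasn’t published Edd’s internals in a form I can reproduce here, so the following is only a toy sketch of the principle behind a Sally-Anne-style test, with names and structure of my own invention: an agent’s belief about an object’s location updates only when that agent actually witnesses the move.

```python
# A toy model of false-belief tracking. Illustrative only; this is not
# RPI's RASCALS architecture, just the underlying idea.

class Agent:
    def __init__(self, name):
        self.name = name
        self.beliefs = {}       # object -> where the agent believes it is
        self.present = True     # whether the agent is in the room

    def observe(self, obj, location):
        # Only agents present in the room update their beliefs.
        if self.present:
            self.beliefs[obj] = location

def move_object(obj, location, agents):
    """Move an object; every agent gets a chance to witness it."""
    for agent in agents:
        agent.observe(obj, location)

# Re-enact the gun-and-briefcase scenario.
john, edd = Agent("John"), Agent("Edd")

move_object("gun", "case A", [john, edd])   # Mike puts the gun in case A
john.present = False                        # John leaves the room
move_object("gun", "case B", [john, edd])   # Mike moves the gun to case B
john.present = True                         # John returns

# To answer Mike, Edd consults John's belief state, not the true location.
print(edd.beliefs["gun"])    # "case B": what Edd himself knows
print(john.beliefs["gun"])   # "case A": where John will look
```

Passing the test amounts to keeping those two records separate: the world as it is, and the world as John last saw it.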

The research group at RPI sees Edd as a first step toward the creation of avatars in Second Life that can interact with humans in a manner unlike that of any simulated characters before, able to understand and predict the actions of others and to act virtually autonomously. They see potential benefits of this technology in education and defense, as well as in entertainment. IBM, a supporter of the research, envisions creating holographic characters for games like Second Life that could interact with humans directly.

This is all pretty amazing stuff, but for some reason HAL singing “Daisy Bell” keeps eerily replaying in my head as I write it.