What are we getting wrong in neuroscience?
August 20, 2016
In 1935, an ambitious neurology professor named Egas Moniz sat in the audience at a symposium on the frontal lobes, enthralled by neuroscientist Carlyle F. Jacobsen's description of some experiments Jacobsen had conducted with fellow investigator John Fulton. Jacobsen and Fulton had damaged the frontal lobes of a chimpanzee named "Becky," and afterwards they had observed a considerable behavioral transformation. Becky had previously been stubborn, erratic, and difficult to train, but post-operation she became placid, imperturbable, and compliant.
Moniz had already been thinking about the potential therapeutic value of frontal lobe surgery in humans after reading some papers about frontal lobe tumors and how they affected personality. He believed that some mental disorders were caused by static abnormalities in frontal lobe circuitry. By removing a portion of the frontal lobes, he hypothesized he would also be removing neurons and pathways that were problematic, in the process alleviating the patient's symptoms. Although Moniz had been pondering this possibility, Jacobsen's description of the changes seen in Becky was the impetus Moniz needed to try a similar approach with humans. He did so just three months after seeing Jacobsen's presentation, and the surgical procedure that would come to be known as the frontal lobotomy was born.
Moniz's procedure initially involved drilling two holes in a patient's skull, then injecting pure alcohol subcortically into the frontal lobes, with the hope of destroying the regions where the mental disorder resided. Moniz soon turned to another tool for ablation, however---a steel loop he called a leucotome (from the Greek for "white matter knife")---and began calling the procedure a prefrontal leucotomy. Although his means of assessing the effectiveness of the procedure were inadequate by today's standards---for example, he generally only monitored patients for a few days after surgery---Moniz reported recovery or improvement in most of the patients who underwent the procedure. Soon, prefrontal leucotomies were being performed in a number of countries throughout the world.
The operation attracted the interest of neurologist Walter Freeman and neurosurgeon James Watts. They modified the procedure again, this time to involve entering the skull from the side using a large spatula. Once inside the cranium, the spatula was wiggled up and down in the hopes of severing connections between the thalamus and prefrontal cortex (based on the hypothesis that these connections were crucial for emotional responses, and could precipitate a disorder when not functioning properly). They also renamed the procedure "frontal lobotomy," as leucotomy implied only white matter was being removed and that was not the case with their method.
Several years later (in 1946), Freeman made one final modification to the procedure. He advocated for using the eye socket as an entry point to the frontal lobes (again to sever the connections between the thalamus and frontal areas). As his tool to do the ablation, he chose an ice pick. The ice pick was inserted through the eye socket, wiggled around to do the cutting, and then removed. The procedure could be done in 10 minutes; the development of this new "transorbital lobotomy" brought about the real heyday of lobotomy.
The introduction of transorbital lobotomy led to a significant increase in the popularity of the operation---perhaps due to the ease and expediency of the procedure. Between 1949 and 1952, somewhere around 5,000 lobotomies were conducted each year in the United States (the total number of lobotomies done by the 1970s is thought to have been between 40,000 and 50,000). Watts strongly protested the transformation of lobotomy into a procedure that could be done in one quick office visit---and done by a psychiatrist instead of a surgeon, no less---a dispute that ultimately led Watts and Freeman to sever their partnership.
Freeman, however, was not discouraged; he became an ardent promoter of transorbital lobotomy. He traveled across the United States, stopping at mental asylums to perform the operation on any patients who seemed eligible and to train the staff to perform the surgery after he had moved on. Freeman himself is thought to have performed or supervised around 3,500 lobotomies; his patients included a number of minors, among them a 4-year-old child (who died 3 weeks after the procedure).
Eventually, however, the popularity of transorbital lobotomy began to fade. One would like to think that this happened because people recognized how barbaric the procedure was (along with the fact that it rested on flimsy scientific rationale). The real reasons for abandoning the operation, however, were more pragmatic. The downfall of lobotomy began with questions about the effectiveness of the surgery, especially in treating conditions like schizophrenia. It was also recognized that faculties like motivation, spontaneity, and abstract thought often suffered irreparable damage from the procedure. And the final nail in the coffin of lobotomy was the development of psychiatric drugs like chlorpromazine, which for the first time gave clinicians a pharmacological option for intractable cases of mental disorders.
It is easy for us now to look at the practice of lobotomy as nothing short of brutality, and to scoff at what seems like a tenuous scientific explanation for why the procedure should work. It's important, however, to look at such issues in the history of science in the context of their time. In an age when effective psychiatric drugs were nonexistent, psychosurgical interventions were viewed as the "wave of the future." They offered a hopeful possibility for treating disorders that were often incurable and potentially debilitating. And while the approach of lobotomy seems far too non-selective (meaning such serious brain damage was not likely to affect just one mental faculty) to us now, the idea that decreasing frontal lobe activity might reduce mental agitation was actually based on the available scientific literature at the time.
Still, it's clear that the decision to attempt to treat psychiatric disorders through inflicting significant brain damage represented a failure of logic at multiple levels. When we discuss neuroscience today, we often assume that our days of such egregious mistakes are over. And while we have certainly progressed since the time of lobotomies (especially in the safeguards protecting patients from such untested and dangerous treatments), we are not that far removed temporally from this sordid time in the history of neuroscience. Today, there is still more unknown about the brain than there is known, and thus it is to be expected that we continue to make significant mistakes in how we think about brain function, experimental methods in neuroscience, and more.
Some of these mistakes may be due simply to a natural human approach to understanding difficult problems. For example, when we encounter a complex problem we often first attempt to simplify it by devising some straightforward way of describing it. Once a basic appreciation is reached, we add to this elementary knowledge to develop a more thorough understanding---and one that is more likely to be a better approximation of the truth. However, that overly simplistic conceptualization of the subject can give birth to countless erroneous hypotheses when used in an attempt to explain something as intricate as neuroscience. And in science, these types of errors can lead a field astray for years before it finds its way back on course.
Other mistakes involve research methodology. Due to the rapid technological advances in neuroscience that have occurred in the past half-century, we have some truly amazing neuroscience research tools available to us that would have only been science fiction 100 years ago. Excitement about these tools, however, has caused researchers in some cases to begin utilizing them extensively before we are fully prepared to do so. This has resulted in using methods that cannot yet answer the questions we presume they can, and has provided us with results that we are sometimes unable to accurately interpret. In accepting the answers we obtain as legitimate and assuming our interpretations of results are valid, we may commit errors that can confound hypothesis development for some time.
Advances in neuroscience in the 20th and into the 21st century have been nothing short of mind-boggling, and our successes in understanding far outpace our long-standing failures. However, any scientific field is rife with mistakes, and neuroscience is no different. In this article, I will discuss just a few examples of how missteps and misconceptions continue to affect progress in the field of neuroscience.
The ________________ neurotransmitter
Nowadays the fact that neurons use signaling molecules like neurotransmitters to communicate with one another is one of those pieces of scientific knowledge that is widely known even to non-scientists. Thus, it may be a bit surprising that this understanding is less than 100 years old. It was in 1921 that the German scientist Otto Loewi first demonstrated that, when stimulated, the vagus nerve releases a chemical substance that can affect the activity of nearby cells. Several years later, that substance was isolated by Henry Dale and determined to be acetylcholine (at that point a substance that had already been identified, just not as a neurotransmitter). It wasn't until the middle of the 20th century, however, that it became widely accepted that neurotransmitters were used throughout the brain. The discovery of other neurotransmitters and neuropeptides would be scattered throughout the second half of the 20th century.
Of course, each time a new neurotransmitter (or other signaling molecule like a neuropeptide) is discovered, one of the first questions scientists want to answer is "what is its role in the brain?" The approach to answering this question generally involves some degree of simplification, as researchers tend to search for one overriding function that can be used to describe the neurotransmitter. As a result, the first really intriguing function discovered for a neurotransmitter often becomes the way we define it.
Gradually, the discovered functions of the neurotransmitter become so diverse that it is no longer rational to attach one primary role to it, and researchers are forced to revise their initial explanations of the neurotransmitter's function by incorporating new discoveries. Sometimes it is later found that the function originally linked to the neurotransmitter does not even match up well with the tasks the chemical is actually responsible for in the brain. The idea that a neurotransmitter has one main function, however, can be difficult to dislodge once it takes hold. This becomes a problem because that inaccurate conceptualization may drive years of research seeking evidence to support a particular role for the neurotransmitter, while that hypothesized role may be misunderstood---or outright erroneous.
The neuropeptide oxytocin provides a good example of this phenomenon. The history of oxytocin begins with the same Henry Dale mentioned above. In 1906, Dale found that extracts of ox pituitary glands could speed up uterine contractions when administered to a variety of mammals including cats, dogs, rabbits, and rats. This discovery soon led to the exploration of similar extracts to assist in human childbirth; they were found to be especially helpful in facilitating labor that was progressing slowly. These effects on childbirth are how oxytocin earned its name, which is derived from the Greek words for "quick birth."
Clinical use of oxytocin didn't become widespread until researchers were able to synthesize oxytocin in the laboratory. But after that occurred in the 1950s, oxytocin became the most commonly used agent to induce labor throughout the world (sold under the trade names Pitocin and Syntocinon). However, despite the fact that oxytocin plays such an important role in a significant percentage of pregnancies today, the vast majority of research and related news on oxytocin in the past few decades has involved very different functions for the hormone: love, trust, and social bonding.
This line of research can be traced back to the 1970s when investigators learned that oxytocin reached targets throughout the brain, suggesting it might play a role in behavior. Soon after, researchers found that oxytocin injections could prompt virgin female rats to exhibit maternal behaviors like nest building. Researchers then began exploring oxytocin's possible involvement in a variety of social interactions ranging from sexual behavior to aggression. In the early 1990s, some discoveries of oxytocin's potential contribution to forming social bonds emerged from an uncommon species to use as a research subject: the prairie vole.
The prairie vole is a small North American rodent that looks kind of like a cross between a gopher and a mouse. They are somewhat unremarkable animals except for one unusual feature of their social lives: they form what seem to be primarily monogamous long-term relationships with voles of the opposite sex. This is not common among mammals; it is estimated that only about 3 to 5% of mammalian species display evidence of monogamy.
A monogamous rodent species creates an interesting opportunity to study monogamy in the laboratory. Researchers learned that female prairie voles begin to display a preference for a male---a preference that can lead to a long-term attachment---after spending just 24 hours in the same cage as the male. It was also observed that administration of oxytocin made it more likely females would develop this type of preference for a male vole, and treatment with an oxytocin antagonist made it less likely. Thus, oxytocin became recognized as playing a crucial part in the formation of heterosexual social bonds in the prairie vole---a discovery that would help to launch a torrent of research into oxytocin's involvement in social bonding and other prosocial behaviors.
When researchers then turned from rodents like prairie voles to attempt to understand the role oxytocin might play in humans, research findings that suggested oxytocin acted to promote positive emotions and behavior in people began to accumulate. Administration of oxytocin, for example, was found to increase trust. People with higher levels of oxytocin were observed to display greater empathy. Oxytocin administration was discovered to make people more generous and to promote faithfulness in long-term relationships. One study even found that petting a dog was associated with increased oxytocin levels---in both the human and the dog. Due to the large number of study results indicating a positive effect of oxytocin on socialization, the hormone earned a collection of new monikers including the love hormone, the trust hormone, and even the cuddle hormone.
Excited by all of these newfound social roles for oxytocin, researchers eagerly---and perhaps impetuously---began to explore the role of oxytocin deficits in psychiatric disorders along with the possibility of correcting those deficits with oxytocin administration. One disorder that has gained a disproportionate amount of attention in this regard is autism spectrum disorder, or autism. Oxytocin deficits seemed to be a logical explanation for autism since social impairment is a defining characteristic of the disorder, and oxytocin appeared to promote healthy social behavior. As researchers began to delve into the relationship between oxytocin levels in the blood and autism, however, they did not find what seemed to be a direct relationship. Undeterred, investigators explored the intranasal administration of oxytocin---which involves spraying the neuropeptide into the nasal passages---on symptoms in autism patients. And initially, there were indications intranasal oxytocin might be effective at improving autism symptoms (more on this below).
Soon, however, some began to question whether all of the excitement surrounding the "trust hormone" had caused researchers to make hasty decisions regarding experimental design, for all of the studies of intranasal oxytocin were relying on a delivery method that wasn't---and still has not been---fully validated. Researchers turned to the intranasal method of administration because oxytocin that enters the bloodstream does not appear to cross the blood-brain barrier in appreciable amounts; there are indications, however, that the neuropeptide does make it into the brain via the intranasal route. The problem is that even with intranasal delivery very little oxytocin reaches the brain---according to one estimate, only 0.005% of the administered dose. Even when very high doses are used, the amount that reaches the brain via intranasal delivery does not seem comparable to the amount of oxytocin that must be administered directly into the brain (intracerebroventricularly) of an animal to influence behavior.
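To put that estimate in perspective, here is a back-of-the-envelope calculation. Only the 0.005% figure comes from the estimate above; the 24 IU dose is a hypothetical value chosen purely for illustration:

```python
# Back-of-the-envelope: how much of an intranasal oxytocin dose reaches
# the brain, given the 0.005% estimate cited above. The 24 IU dose is a
# hypothetical illustration, not a value from any specific study.
dose_iu = 24.0
fraction_reaching_brain = 0.005 / 100        # 0.005% expressed as a proportion

brain_dose_iu = dose_iu * fraction_reaching_brain
print(f"Of a {dose_iu:.0f} IU dose, roughly {brain_dose_iu:.4f} IU reaches the brain")
# roughly 0.0012 IU
```

In other words, on this estimate more than 99.99% of the administered dose never reaches the brain at all, which is why the peripheral-effects alternative discussed below is worth taking seriously.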
But many studies have indicated an effect, so what is really going on here? One possibility is that the effects are not due to the influence of oxytocin on the central nervous system, but to oxytocin entering the bloodstream and interacting with the large number of oxytocin receptors in the peripheral nervous system; if true, this would mean that exogenous oxytocin is not having the effects on the brain researchers have hypothesized. Another, more concerning, possibility is that many of the studies published on the effects of intranasal oxytocin suffer from methodological problems like questionable statistical approaches to analyzing data.
Indeed, criticisms of the statistical methods of some of the seminal papers in this field have been made publicly. A recent review also found that studies of intranasal oxytocin often involve small sample sizes; significant findings of small studies are more likely to be statistical aberrations and not representative of true effects. It is also probable that the whole area of research is influenced by publication bias, which is the tendency to publish reports of studies that observe significant results while neglecting to publish reports of studies that fail to see any notable effects. This may seem like a necessary evil, as journal readers are more likely to be interested in learning about new discoveries than experiments that yielded none. Ignoring non-significant findings, though, can lead to the exaggeration of the importance of an observed effect because the available literature may seem to indicate no conflicting evidence (even though such evidence might exist hidden away in the file drawers of researchers throughout the world).
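The interaction between small samples and publication bias is easy to demonstrate with a short simulation. All parameters below (sample size, effect size, number of studies) are illustrative choices, not values drawn from any actual oxytocin study: if many underpowered studies of a nonexistent effect are run and only the "significant" ones get published, the published literature will report a substantial average effect where none exists.

```python
# Sketch of how publication bias inflates effect sizes: simulate many
# small two-group studies in which the TRUE effect is zero, then look
# only at the studies that happened to reach statistical significance.
import math
import random

random.seed(42)

N_STUDIES = 10_000
N_PER_GROUP = 15            # a small sample, as in much of this literature
TRUE_EFFECT = 0.0           # assume oxytocin does nothing at all
SE = math.sqrt(2 / N_PER_GROUP)   # std. error of the mean difference (sd = 1)

significant_effects = []
for _ in range(N_STUDIES):
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    diff = sum(treated) / N_PER_GROUP - sum(control) / N_PER_GROUP
    if abs(diff / SE) > 1.96:        # two-tailed z-test at alpha = 0.05
        significant_effects.append(abs(diff))   # this study gets "published"

false_positive_rate = len(significant_effects) / N_STUDIES
mean_published_effect = sum(significant_effects) / len(significant_effects)

print(f"false positive rate: {false_positive_rate:.3f}")            # ~0.05
print(f"mean 'published' effect size: {mean_published_effect:.2f}")
```

Roughly 5% of the simulated studies come out significant by chance alone, and because a difference only clears the significance threshold here when it exceeds about 0.72 standard deviations, the average "published" effect is sizable despite the true effect being exactly zero. This is the file-drawer problem in miniature.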
These potential issues are underscored by inconsistent research results and failed attempts at replicating, or repeating, studies that have reported significant effects of intranasal oxytocin. For example, one of the most influential studies on intranasal oxytocin, which found that oxytocin increases trust, has failed to replicate several times. And in many cases, null research findings have emerged after initial reports indicated a significant effect. The findings of the early autism studies mentioned above, for example, have been contradicted by multiple randomized controlled trials conducted in the last few years, which reported a lack of any significant therapeutic effect.
Not surprisingly, over the years the simple understanding of oxytocin as a neuropeptide that promotes positive emotions and behavior has also become more complicated as it was learned that the effects of oxytocin might not always be so rosy. In one study, for example, researchers observed intranasal oxytocin to be associated with increased envy and gloating. Another study found oxytocin increased ethnocentrism, or the tendency to view one's own ethnicity or culture as superior to others. And in a recent study, intranasal administration of oxytocin increased aggressive behavior. To add even more complexity to the picture, the effects of oxytocin may not be the same in men and women and may even be disparate in different individuals and different environmental contexts.
In an attempt to explain these discordant findings, researchers have proposed new interpretations of oxytocin's role in social behavior. One hypothesis, for example, suggests that oxytocin is involved in promoting responsiveness to any important social cue---whether it be positive (e.g. smiling) or negative (e.g. aggression); this is sometimes called the "social salience" hypothesis. Despite such recent efforts to reconcile the seemingly contradictory findings in oxytocin research, however, there is still not a consensus as to the effects of oxytocin, and the hypothesis that oxytocin is involved in positive social behavior continues to guide the majority of the research in this area.
Thus, for years now oxytocin research has centered on a role for the neuropeptide that is at best sensationalized and at worst deeply flawed. And oxytocin is only the most recent example of this phenomenon. In the 1990s, dopamine earned a reputation as the "pleasure neurotransmitter." Soon after, serotonin became known as the "mood neurotransmitter." These appellations were based on the most compelling discoveries linked to these neurotransmitters: dopamine is involved in processing rewarding stimuli, and serotonin is targeted by treatments for depression.
However, now that we know more about these substances, it is clear these short definitions of functionality are much too simplistic. Not only are dopamine and serotonin involved in much more than reward and mood, respectively, but also the roles of these two neurotransmitters in reward and mood seem to be very complicated and poorly understood. For example, most researchers no longer think dopamine signaling causes pleasure, but that it is involved with other intricacies of memorable experiences like the identification of important stimuli in the environment---whether they be positive (i.e. rewarding) or negative. Likewise, that serotonin levels alone don't determine mood is now common knowledge in scientific circles (and is finding its way into public perception as well). Thus, these short, easy-to-remember titles are misleading---and somewhat useless.
In assigning one function to one neurotransmitter or neuropeptide, we overlook important facts like the understanding that these neurochemicals often act at multiple receptor subtypes, sometimes with drastically different effects. And we neglect to consider that different areas of the brain have different levels of receptors for each neurochemical, and may be preferentially populated with one receptor subtype over another---leading to different patterns of activity in different brain regions with different functional specializations. Add to that all of the downstream effects of receptor activation (which can vary significantly depending on the receptor subtype, the area of the brain in which it is found, etc.), and you have an extremely convoluted picture. Trying to sum it all up in one function is ludicrous.
Not only do these simplifying approaches hinder a more complete understanding of the brain, they also waste countless research hours and funding dollars pursuing confirmation of ideas that will likely have to be replaced eventually with something more elaborate. Regardless, this type of simplification in science does seem to serve a purpose. Our brains gravitate towards straightforward ways of explaining things---possibly because without some comprehensible framework to start from, understanding something as complex as the brain seems like a Herculean task. However, if we are going to utilize this approach, we should at least do it with more awareness of our tendency to do so. By recognizing that, when it comes to the brain, the story we are telling is almost always going to be much more complicated than we are inclined to believe, perhaps we can avoid repeating the mistakes of oversimplification we have made in the past.
Psychotherapeutic drugs and the deficits they correct
Before the 1950s, the treatment of psychiatric disorders looked very different from how it does today. As discussed above, unrefined neurosurgery like a transorbital lobotomy was considered a viable approach to treating a variety of ailments ranging from panic disorder to schizophrenia. But lobotomy was only one of a number of potentially dangerous interventions used at the time that generally did little to improve the mental health of most patients. Pharmacological treatments were not much more refined, and often involved the use of agents that simply acted as strong sedatives to make a patient's behavior more manageable.
The landscape began to change dramatically in the 1950s, however, when a new wave of pharmaceutical drugs became part of psychiatric treatment. The first antipsychotics for treating schizophrenia, the first antidepressants, and the first benzodiazepines to treat anxiety and insomnia were all discovered in this decade. Some refer to the 1950s as the "golden decade" of psychopharmacology, and the decades that followed as the "psychopharmacological revolution," since over this time the discovery and development of psychiatric drugs would progress exponentially; soon pharmacological treatments would be the preferred method of treating psychiatric illnesses.
The success of new psychiatric drugs over the second half of the twentieth century came as something of a surprise because the disorders these drugs were being used to treat were still poorly understood. Thus, drugs were often discovered to be effective through a trial and error process, i.e. test as many substances as we can and eventually maybe we'll find one that treats this condition. Because of how little was understood about the biological causes of these disorders, if a drug with a known mechanism was found to be effective in treating a disorder with an unknown mechanism, often it led to a hypothesis that the disorder must be due to a disruption in the system affected by the drug.
Antidepressants serve as a prime example of this phenomenon. Before the 1950s, a biological understanding of depression was essentially nonexistent. The dominant perspective of the day was psychoanalytic---depression was caused by internal conflicts among warring facets of one's personality, conflicts generally thought to be created by the internalization of troublesome or traumatic experiences earlier in life. The only non-psychoanalytic approaches to treatment involved poorly understood procedures like electroconvulsive shock therapy---which was actually effective in certain cases, but potentially dangerous in others---and drugs like barbiturates or amphetamines, which didn't seem to target anything specific to depression but instead caused widespread sedation or stimulation, respectively.
The first antidepressants were discovered serendipitously. The story of iproniazid, one of the first drugs marketed specifically for depression---the other being imipramine, which underwent discovery and the first clinical uses at around the same time as iproniazid---is a good example of the serendipity involved. In the early 1950s, researchers were working with a chemical called hydrazine and investigating its derivatives for anti-tuberculosis properties (tuberculosis was a scourge at the time and it was routine to test any chemical for its potential to treat the disease). Interestingly, hydrazine derivatives may never have even been tested if the Germans hadn't used hydrazine as rocket fuel during World War II, causing large surpluses of the substance to be found at the end of the war and then sold to pharmaceutical companies on the cheap.
In 1952, a hydrazine derivative called iproniazid was tested on tuberculosis patients in Sea View Hospital on Staten Island, New York. Although the drug did not seem to be superior to other anti-tuberculosis agents in treating tuberculosis, a strange side effect was noted in these preliminary trials: patients who took iproniazid displayed increased energy and significant improvements in mood. One researcher reported that patients were "dancing in the halls 'tho there were holes in their lungs." Although at first largely overlooked as "side effects" of iproniazid treatment, eventually researchers became interested in the mood-enhancing effect of the drug in and of itself; before the end of the decade, the drug was being used to treat patients with depression.
Around the same time as the discovery of the first antidepressant drugs, a new technique called spectrophotofluorimetry was being developed. This technique allowed researchers to detect changes in the levels of neurotransmitters called monoamines (e.g. dopamine, serotonin, norepinephrine) after the administration of drugs (like iproniazid) to animals. Using it, researchers determined that iproniazid and imipramine were having an effect on monoamines. Specifically, the administration of these antidepressants was linked to an increase in serotonin and norepinephrine levels.
This discovery led to the first biological hypothesis of depression, which suggested that depression is caused by deficiencies in levels of serotonin and/or norepinephrine. At first, this hypothesis focused primarily on norepinephrine and was known as the "noradrenergic hypothesis of depression." Later, however---due in part to the putative effectiveness of antidepressant drugs developed to more specifically target the serotonin system---the emphasis would be placed more on serotonin's role in depression, and the "serotonin hypothesis of depression" would become the most widely accepted view of depression.
The serotonin hypothesis would go on to be endorsed not only by the scientific community, but also---thanks in large part to the frequent referral to a serotonergic mechanism in pharmaceutical ads for antidepressants---by the public at large. It would guide drug development and research for years. As the serotonin hypothesis was reaching its heyday, however, researchers were also discovering that it didn't seem to tell the whole story of the etiology of depression.
A number of problems with the serotonin hypothesis were emerging. One was that antidepressant drugs took weeks to produce a therapeutic benefit, but their effects on serotonin levels seemed to occur within hours after administration. This suggested that, at the very least, some mechanism other than increasing serotonin levels was involved in the therapeutic effects of the drugs. Other research that questioned the hypothesis began to accumulate as well. For example, experimentally depleting serotonin in humans was not found to cause depressive symptoms.
There is now a long list of experimental findings that question the serotonin and noradrenergic hypotheses (indeed, the area of research is muddied even further by evidence suggesting antidepressant drugs may not even be all that effective). Clearly, changes in monoamine levels are an effect of most antidepressants, but it does not seem that there is a direct relationship between serotonin or norepinephrine levels and depression. At a minimum, there must be another component to the mechanism.
For example, some have proposed that increases in serotonin levels are associated with the promotion of neurogenesis (the birth of new neurons) in the hippocampus, which is an important brain region for the regulation of the stress response. But recently researchers have also begun to deviate significantly from the serotonin hypothesis, suggesting bases for depression that are different altogether. One more recent hypothesis, for instance, focuses on a role for the glutamate system in the occurrence of depression.
The serotonin hypothesis of depression is just one of many hypotheses of the biological causes of psychiatric disorders that were formulated based on the assumption that the primary mechanism of a drug that treats a disorder must be correcting the primary dysfunction that causes the disorder. The same logic was used to devise the dopamine hypothesis of schizophrenia and the low arousal hypothesis of attention-deficit hyperactivity disorder (ADHD). Both hypotheses were at one point the most commonly touted explanations for schizophrenia and ADHD, respectively, but both are now generally considered too simplistic (at least in their original formulations).
The logic used to construct such hypotheses is somewhat tautological: drug A increases B and treats disorder C, thus disorder C is caused by a deficiency in B. It neglects to recognize that B may be just one factor influencing some downstream target, D, and thus the effects of the drug may be achieved in various ways, of which increasing B is just one. It fails to appreciate the sheer complexity of the nervous system and the multitude of factors likely involved in the onset of psychiatric illness. These factors include not just neurotransmitters, but also hormones, genes, gene expression, aspects of the environment, and an extensive list of other possible influences. The complexity of psychiatry likely means there are an almost inconceivable number of ways for a disorder like depression to develop, and our understanding of the main pathways involved is likely still rudimentary.
Thus, when we simplify such a complex issue to rest primarily upon the levels of one neurotransmitter, we are making a similar type of mistake as discussed in the first section of this article, but perhaps with even greater repercussions. For the errors that result from simplifying psychiatric disorders into "one neurotransmitter" maladies affect not just progress in neuroscience, but also the mental and physical health of patients suffering from these disorders. Many of these patients are prescribed psychiatric drugs on the assumption that their disorder is simple enough to be fixed by adjusting some "chemical imbalance"; perhaps it is not surprising, then, that psychiatric drugs prove ineffective in a large proportion of patients. And many patients continue taking such drugs---sometimes with minimal benefit---despite experiencing significant side effects. Thus, there is all the more reason in this area to move away from searching for simple answers based on known mechanisms and to venture out into more intimidating and unknown waters.
Our faith in functional neuroimaging
As approaches to creating images of brain activity like positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) were developed in the second half of the twentieth century, they understandably sparked a great deal of excitement among neuroscientists. These methods allowed neuroscience to achieve something once thought to be impossible---the ability to see what was happening in the brain in (close to) real time. By monitoring cerebral blood flow using a technique like fMRI, one can tell which brain areas are receiving the most blood---and by extension which areas are most active neuronally---when someone is performing some action (e.g. completing a memory task, thinking about a loved one, viewing pictures of rewarding or aversive stimuli, etc.).
This method of neuroimaging, which finally allowed researchers to draw conclusions about elusive connections between structure and function, was dubbed functional neuroimaging. Functional neuroimaging methods have predictably become some of the most popular research tools in neuroscience over the last few decades. fMRI surpassed PET as a preferred tool for functional neuroimaging soon after it was developed (due to a variety of factors including better spatial resolution and a less invasive approach), and it has been the investigative method of choice in over 40,000 published studies since the 1990s.
The potential of functional neuroimaging---and fMRI in particular---to unlock countless secrets of the brain intrigued not only investigators but also the popular press. The media quickly realized that the results of fMRI studies could be simplified, combined with some colorful pictures of brain scans, and sold to the public as representative of huge leaps in understanding which parts of the brain are responsible for certain behaviors or patterns of behaviors. The simplification of these studies led to incredible claims that intricate patterns of behavior and emotion like religion or jealousy emanated primarily from one area of the brain.
Fortunately, this wave of sensationalism has died down a bit as many neuroscientists have been vocal about how this type of oversimplification is taken so far that it propagates untruths about the brain and misrepresents the capabilities of functional neuroimaging. The argument against oversimplifying fMRI results, though, is often an argument against oversimplification itself. The assumption is that the methodology is not flawed, but the interpretation is. More and more researchers, however, are asserting that not only are the reported results of neuroimaging experiments ripe for misinterpretation, but also they are often simply inaccurate.
One major problem with functional neuroimaging involves how the data from these experiments are handled. In fMRI, for example, the device creates a representation of the brain by dividing an image of the brain into thousands of small 3-D cubes called voxels. Each voxel can represent the activity of over a million neurons. Researchers must then analyze the data to determine which voxels are indicative of higher levels of blood flow, and these results are used to determine which areas of the brain are most active. Most of the brain is active at all times, however, so researchers must compare activity in each voxel to activity in that voxel during another task to determine if blood flow in a particular voxel is higher during the task they are interested in.
Due to the sheer volume of data, an issue arises in deciding whether the blood flow observed in a particular voxel represents activity above baseline. Each fMRI image can consist of anywhere from 40,000 to 500,000 voxels, depending on the settings of the machine, and each experiment involves many images (sometimes thousands), each taken a couple of seconds apart. This creates a statistical complication called the multiple comparisons problem: if you perform a large number of tests, you are more likely to find a significant result simply by chance than if you performed just one test.
For example, if you flip a coin ten times, it would be highly unlikely you would get tails nine times. But if you flipped 50,000 coins ten times each, you would be much more likely to see that result for at least one of the coins. That coin's result is what, in experimental terms, we would call a false positive. If you're using a typical coin, getting nine tails out of ten flips doesn't tell you anything about the inherent qualities of the coin---it's just a statistical aberration that occurred by chance. The same type of thing is more likely to occur when a researcher makes the sometimes millions of comparisons (between active and baseline voxels) involved in an fMRI study. By chance alone, it's likely some of them will appear to indicate a significant level of activity.
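The coin-flip intuition is easy to verify with a quick simulation. The sketch below (a toy illustration, not an analysis of real fMRI data) flips 50,000 fair coins ten times each and counts how many of them show nine or more tails---an outcome that is rare for any single coin but nearly guaranteed to appear somewhere in a large enough batch:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

def nine_or_more_tails(flips=10):
    """Flip a fair coin `flips` times; True if at least 9 come up tails."""
    tails = sum(random.random() < 0.5 for _ in range(flips))
    return tails >= 9

# For a single fair coin, P(>= 9 tails in 10 flips) = 11/1024, about 1.1%.
# Across 50,000 coins, though, hundreds of coins will show this
# "significant-looking" result by chance alone.
extreme_coins = sum(nine_or_more_tails() for _ in range(50_000))
print(f"coins showing >= 9 tails: {extreme_coins} of 50,000")
```

Each of those "extreme" coins is a false positive in exactly the sense described above: a perfectly ordinary coin that looks remarkable only because so many tests were run.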
This problem was exemplified through an experiment conducted by a group of researchers in 2009 that involved an fMRI scan of a dead Atlantic salmon (yes, the fish). The scientists put the salmon in an fMRI scanner and showed the fish a collection of pictures depicting people engaged in different social situations. They went so far as to ask the salmon---again, a dead fish---what emotion the people in the photographs were experiencing. When the researchers analyzed their data without correcting for the multiple comparisons problem, they observed the miraculous: the dead fish appeared to display brain activity that indicated it was "thinking" about the emotions being portrayed in the photographs. Of course this wasn't what was really going on; instead, it was that the false positives emerging due to the multiple comparisons problem made it appear as if there was real activity occurring in the fish's brain when obviously there was not.
The salmon experiment shows how serious a concern the multiple comparisons problem can be when it comes to analyzing fMRI data. The problem, however, is a well-known issue by now, and most researchers correct for it in some way when statistically analyzing their neuroimaging data. Even today, however, not all do---a 2012 review of 241 fMRI studies found that the authors of 41% of them did not report making any adjustments to account for the multiple comparisons problem. Even when conscious attempts to avoid the multiple comparisons problem are made, though, there is still a question of how effective they are at producing reliable results.
For example, one method for dealing with the multiple comparisons problem that has become popular among fMRI researchers is called clustering. In this approach, only when clusters of contiguous voxels are active together is there enough cause to consider a region of the brain more active than baseline. Part of the rationale here is that if a result is legitimate, it is more likely to involve aggregates of active voxels, and so by focusing on clusters instead of individual voxels one can reduce the likelihood of false positives.
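The intuition behind clustering can be shown with a highly simplified, one-dimensional toy version (real packages work on 3-D volumes and estimate the spatial smoothness of the data, so this sketch only illustrates the general idea of a cluster-extent threshold):

```python
import random

random.seed(1)

def active_clusters(active, min_size=3):
    """Return lengths of runs of contiguous 'active' voxels that meet
    min_size -- a 1-D toy version of cluster-extent thresholding."""
    clusters, run = [], 0
    for is_active in active:
        if is_active:
            run += 1
        else:
            if run >= min_size:
                clusters.append(run)
            run = 0
    if run >= min_size:  # don't drop a run that ends at the boundary
        clusters.append(run)
    return clusters

# Pure noise: each voxel independently crosses threshold 5% of the time.
noise = [random.random() < 0.05 for _ in range(10_000)]

# Hundreds of isolated voxels are "active," but runs of 3+ contiguous
# active voxels are rare in noise, so clustering suppresses most of them.
print(f"active voxels: {sum(noise)}, surviving clusters: {active_clusters(noise)}")
```

Isolated suprathreshold voxels vastly outnumber contiguous runs in noise, which is why requiring clusters reduces false positives; as the next paragraph notes, however, the standard implementations of this idea have problems of their own.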
The problem with clustering is that it doesn't always seem to work that well. For example, a study published this year analyzed fMRI data from close to 500 subjects using three of the most popular fMRI software packages and found that one common approach to clustering still led to a false positive rate of up to 70%. So, even when researchers take pains to account for the multiple comparisons problem, the results often don't seem to inspire confidence that the effect observed is real and not just a result of random fluctuations in brain activity.
This isn't to say that no fMRI data should be trusted, or that fMRI shouldn't be used to explore brain activity. Rather, it suggests that much more care needs to be taken to ensure fMRI data are handled properly, to avoid drawing erroneous conclusions. Unfortunately, however, the difficulties with fMRI don't begin and end with the multiple comparisons problem. Many fMRI studies also suffer from small sample sizes. This makes it more difficult to detect a true effect, and it means that when some effect is observed, it is more likely to be a false positive. Additionally, it means that when a true effect is observed, its size is more likely to be exaggerated. Some researchers have also argued that neuroimaging research suffers from publication bias, which further inflates the importance of any significant findings because conflicting evidence may not be publicly available.
All in all, this suggests the need for more caution when it comes to conducting and interpreting the results of fMRI experiments. fMRI is an amazing technology that offers great promise in helping us to better understand the nervous system. However, functional neuroimaging is a relatively young field, and we are still learning how to properly use techniques like fMRI. It's to be expected, then---as with any new technology or recently developed field---that there will be a learning curve as we develop an appreciation for the best practices in how to obtain data and interpret results. Thus, while we continue to learn these things, we should use considerable restraint and a critical eye when assessing the results of functional neuroimaging experiments.
Progress in neuroscience over the past several centuries has changed our understanding of what it means to be human. Over that time, we learned that the human condition is inextricably connected to this delicate mass of tissue suspended in cerebrospinal fluid in our cranium. We discovered that most afflictions that affect our behavior originate in that tissue, and then we started to figure out ways to manipulate brain activity---through the administration of various substances both natural and man-made---to treat those afflictions. We developed the ability to observe activity in the brain as it occurs, making advances in understanding brain function that humans were once thought to be incapable of. And there are many research tools in neuroscience that are still being refined, but which hold the promise of even greater breakthroughs over the next 50 years.
The mistakes made along the way are to be expected. As a discipline grows, the accumulation of definitive knowledge does not follow a straight trajectory. Rather, it involves an accurate insight followed by some fumbling around in the dark before another truthful deduction is made. Neuroscience is no different. Although we have a tendency to think highly of our current state of knowledge in the field, chances are that at any point in time it is still riddled with errors. The goal is not to achieve perfection, but simply to remain cognizant of the impossibility of doing so. By recognizing that we never know as much as we think we know, and by frequently assessing which approaches to understanding are leading us astray, we are more likely to arrive at an approximation of the truth.
References (in addition to linked text above):
Finger, S. Origins of Neuroscience. New York, NY: Oxford University Press; 1994.
Lopez-Munoz, F., & Alamo, C. (2009). Monoaminergic neurotransmission: the history of the discovery of antidepressants from 1950s until today. Current Pharmaceutical Design, 15(14), 1563-1586. DOI: 10.2174/138161209788168001
Valenstein, ES. Great and Desperate Cures: The Rise and Decline of Psychosurgery and other Radical Treatments for Mental Illness. New York, NY: Basic Books, Inc.; 1986.