Neuromyths and the disconnect between science and the public

When the movie Lucy was released in the summer of 2014, it was quickly followed by a flurry of attention surrounding the idea that we only use 10% of our brains. According to this perspective, around 90% of our neurons lie dormant, all the while teasing us with the reminder that we have achieved only a small fraction of our human potential. In the movie, Scarlett Johansson plays a woman who takes an experimental new drug that makes her capable of using 100% of her brain. Due to this sudden enhancement in brain utilization, she develops unprecedented cognitive abilities, as well as some extrasensory capabilities like telepathy and telekinesis. Thus, the plot of the movie hinges on the accuracy of the 10% of the brain idea. Unfortunately, it is an idea that has by this point been thoroughly debunked. In truth, it appears that all of our brain is active fairly constantly. Although some areas may be more active at times than others, no areas ever go completely dark.

Despite this understanding of full-brain utilization being widely endorsed throughout the scientific community, the 10% of the brain myth is still accepted as true by a significant proportion of the public (which might explain why Lucy was able to rake in close to $460 million worldwide). In fact, a recent survey of educators in five different countries--the United Kingdom, The Netherlands, Turkey, Greece, and China--found that the percentage of teachers who believe in the 10% of the brain myth ranges from a low of 43% in Greece to a high of 59% in China.

The same survey identified a number of other inaccurate beliefs about the brain held by educators--beliefs that have come to be categorized as neuromyths. For example, 44-62% of teachers believed that consuming sugary drinks and snacks would be likely to affect the attention level of their students. The idea that sugar intake can decrease children's attention and increase their hyperactivity has been around since the 1970s. The first formal study to identify a relationship between sugar and hyperactivity was published in 1980, but it was an observational study without the ability to make any determination of a causal relationship. Since then more than a dozen placebo-controlled studies have been conducted, but a relationship between sugar and hyperactivity has not been supported. In fact, some studies found sugar to be associated with decreased activity. Yet, if you spend an afternoon at a toddler's birthday party, your chances of overhearing a parent attributing erratic toddler behavior to cake are around 93% (okay, that's a made-up statistic but the chances are high).

According to the survey, a large percentage (ranging from 71% in China to 91% in the U.K.) of teachers in the countries mentioned above also believe that hemispheric dominance is an important factor in determining individual differences in learning styles. In other words, they believe that some people think more with their "left brain" and others more with their "right brain," and that this lateralization of brain function is reflected in personalities and learning styles. The concept of hemispheric dominance has been an oft-discussed one in neuroscience since the 1860s when Paul Pierre Broca identified a dominant role for the left hemisphere (in most individuals) in language. Since the middle of the twentieth century, however, the concept of hemispheric dominance has been extrapolated to a number of functions other than language. Many now associate the right side of the brain with creativity and intuitive thinking, while the left side of the brain is linked to analytical and logical thought. By extension, people who are creative are sometimes said to be more "right-brained," while those who are more analytical are said to be "left-brained."

These broad characterizations of individuals using one side of their brain more than the other, however, are not supported by research. Although there are certain behaviors that seem to rely more on one hemisphere than the other--language and the left hemisphere being the best example--overall the brain functions as a whole, and in general an individual doesn't preferentially use one hemisphere based on his or her personality. Still, this myth has become so pervasive that a recommendation that children be identified as right- or left-brained and teaching approaches be modified accordingly has even made its way into educational texts.

Where do neuromyths come from?

Neuromyths are generally neither created nor spread with malicious intent. Although there may be instances where inaccurate neuroscience information is used by entrepreneurs hoping to convince the public of the viability of a dubious product, usually neuromyths arise out of some genuine scientific confusion. For example, the misconception that sugar causes hyperactivity in children was bolstered by a study that did detect such an effect. The scientific status of the hypothesis, however, was forced to remain in limbo for a decade until more studies could be conducted. Those subsequent, better-designed studies failed to find a relationship, but by that time the myth had taken on a life of its own. The fact that it was so easy for parents to mistakenly attribute poor behavior to a previous sugary treat helped to sustain and propagate the inaccuracy.

So, some neuromyths are born from a published scientific finding that identifies a potential relationship and grow simply because it then takes time--during which faulty information can spread--to verify such a finding. Many myths also arise, however, from inherent biases--both personal and cultural--or the misinterpretation of data. At times, the sheer complexity of the field of neuroscience may be a contributing factor, as it may cause people to seek out overly simplistic explanations of how the brain works. These oversimplifications can be alluring because they are easy to understand, even if at times they are not fully accurate. For example, the explanation of depression--a disorder with a complex and still not fully understood etiology--as being attributable to an imbalance of one neurotransmitter (i.e. serotonin) was popular in scientific and public discourse for decades before being recognized as too simplistic.

After the scientific community has untangled any confusion that may have led to the creation of a neuromyth, it still takes quite a long time for the myth to die out. In part, this is because there is a disconnect between the information that is readily available to the public and that which is accessible to the scientific community. Most of the general public do not read academic journals. So, if several studies that serve to debunk a neuromyth are published over the course of a few years, the public may be unaware of these recent developments until the findings are communicated to them in some other form (e.g. a website or magazine that has articles on popular science topics).

Eventually this knowledge does find its way into the public discourse, though; it's just a question of when. The 10% of the brain myth, for example, has probably lasted for at least a century. But by 2014 enough of the non-scientific community was aware of the inaccuracies in the plot of Lucy to raise something of an uproar about them. There are other neuromyths, however, that have more recently become part of public knowledge, and thus we are likely to see them come up again and again over the next several years before widespread appreciation of their erroneousness emerges.

The myth of three, a current neuromyth

One example of a (relatively) recently espoused neuromyth is sometimes referred to as the "myth of three." The underlying idea of the myth of three is that there is a critical period between birth and three years of age, during which most of the major events of early brain development occur. This is a time when there is extensive synaptogenesis--a term for the formation of new connections between neurons--occurring. According to the myth of three, if the external environment during this time isn't conducive to learning or healthy brain development, the effects can range from a missed--and potentially lost--opportunity for cognitive growth to irreversible damage. Conversely, by enriching a child's environment during these first three years, you can increase the chances he will grow up to be the next Doogie Howser, MD. In other words, ages 0 to 3 represent the most important years of a child's learning life and essentially determine the course for his or her future cognitive maturation. Hillary Clinton summarized the myth of three appropriately while speaking to a group of educators in 1997 when she said: “It is clear that by the time most children start preschool, the architecture of the brain has essentially been constructed."

There are several problems with the myth of three. The first is with the assumption that age 0-3 is the most critical learning period in our lifetime. This assumption is based primarily on evidence of high levels of synaptogenesis during this time, but it is not clear that this provides convincing support for the unrivaled importance of 0-3 as a learning period. Synaptic density does reach a peak during this time, but it then remains at a high level until around puberty, and some skills that we begin to learn between ages 0 to 3 continue to be improved and refined over the years. In fact, with certain skills (e.g. problem solving, working memory) it seems we are unable to attain true intellectual maturity until we have spent years developing them. Thus, it is questionable if high levels of synaptogenesis are adequate evidence to suggest age 0-3 is the most critical window for learning that we have in our lives.

Also, it is not clear that a disruption in learning during the "critical period" of age 0-3 would have widespread effects on brain function. There are critical or sensitive periods that have been identified for certain brain functions, and some--but not all--rely on external stimulation for normal development to occur. For example, there are aspects of vision that depend on stimulation from the external environment to develop adequately. Indeed, for a complex ability like vision there are thought to be different critical periods for different specific functions like binocular vision or visual acuity. Some of these critical periods do occur between birth and age three. However, some extend well past age 3 and do not correlate well with the period of high synaptogenesis thought to be so important by those who originally advocated the myth of three. Because these critical periods significantly differ by function, it is inaccurate to refer to age 0-3 as a critical period for brain development in general. Thus, deprivation of stimulation during this age range has the potential to affect certain functions, depending on the specific time and severity of the deprivation, but the type of widespread cognitive limitations implied by the myth of three do not seem likely to occur. Additionally, much of the data used to support the assertions of the critical period of 0-3 years involves severe sensory deprivation rather than missed learning opportunities, even though the myth of three has more frequently been used to warn us of the ramifications of the latter.

Furthermore, a dearth of stimulation or lack of learning during a critical or sensitive period doesn't always translate into an irreparable deficit. For example, if a baby is not exposed to a language before 6 months of age, she will have a harder time distinguishing the subtle differences among the speech sounds that make up the language. However, the fact that many adults are able to acquire a second language without having been exposed to it before 6 months of age suggests this doesn't translate into an inability to ever learn the language; it just makes learning that language more difficult. So, even when the environmental conditions aren't conducive to the development of a specific function, it seems our brain is still capable of rescuing that function if learning is resumed later in life.

Additionally, while improving the environment of a child growing up in some form of severe deprivation is beneficial, it is not clear how much enriching the environment of a child already growing up in good conditions will hasten brain development. Yet, this principle forms the cornerstone of a multimillion dollar industry. That industry markets to parents, advertising ways they can raise their infants' IQs by exposing them to things like Baby Mozart CDs, with the hopes that exposure to the music of Mozart will in some subliminal way lay the foundation for more rapid intellectual growth. Of course, there are worse ramifications of misunderstood science than making parents more invested in their child's intellectual development. But sometimes that investment comes with a good dose of anxiety, high expectations, and wasted resources that could have been better used in other ways.

The myth of three, however, was not started by companies marketing goods to produce baby geniuses; their marketing only helped to propagate it. The myth of three is a good example of how legitimate scientific confusion can engender the development of a neuromyth. Our understanding of early brain development, sensitive periods, and the best age for educational interventions is still evolving, and the details continue to be debated. And, because early interventions do seem to be beneficial for children raised in impoverished conditions, there was a plausible reason people expected environmental enrichment could augment the development of already healthy children. The fact that the myth is partially based in truth and that some of the answers are not yet clear makes it likely that this will be an especially persistent belief.

Where do we go from here?

As can be seen from the proliferation of the myth of three, it seems our awareness of the potential for neuromyths to develop is not enough to stop them from doing so. So how can we at least reduce the impact of these inaccurate beliefs? One way is by improving the neuroscientific literacy of our public; a step toward accomplishing this involves increasing the neuroscientific knowledge of our educators. There is a new field emerging to address the disconnect between recent neuroscience research and the information possessed by educators, although it is so new that it is still awaiting a name (neuroeducation is one possibility).

Hopefully, as the field of neuroscience itself also grows, exposure to and understanding of neuroscientific topics among the public will increase. This may make it more difficult for ideas like the 10% of the brain myth to maintain a foothold. It may be impossible to fully eradicate the existence of neuromyths, as they are often based on legitimate scientific discoveries that are later found to be specious (and of course we would need scientific perfection to ensure that false leads are never created). However, awareness of the potential for erroneous beliefs to spread along with an increased understanding of how the brain works may serve to decrease the prevalence of neuromyths.

Bruer, J.T. (1998). The brain and child development: time for some critical thinking. Public Health Reports, 113(5), 388-397.

Howard-Jones, P. (2014). Neuroscience and education: myths and messages. Nature Reviews Neuroscience, 15(12), 817-824. DOI: 10.1038/nrn3817

Early brain development and heat shock proteins

Early nervous system development.

The brain development of a fetus is really an amazing thing. The first sign of an incipient nervous system emerges during the third week of development; it is simply a thickened layer of tissue called the neural plate. After about 5 more days, the neural plate has formed an indentation called the neural groove, and the sides of the neural groove have curled up and begun to fuse together (see the figure above). This will form the neural tube, which will eventually become the brain and spinal cord. By around 10 weeks, all of the major structures of the brain are discernible, even if they are not yet fully mature. So, in a matter of two months, the framework for the human brain is built from scratch. If that doesn't put you in awe of nature, nothing will.

Although the process of neural development is amazing, it is also very sensitive. There are indications that a number of environmental exposures during prenatal development may increase the risk of disorders like autism, schizophrenia, and epilepsy. Some of these dangerous environmental exposures are well known (e.g. alcohol consumption during pregnancy increasing the risk of developing fetal alcohol syndrome). However, there are a number of other factors whose detrimental effects on fetal neural development are still debated or have not yet been fully elucidated. For example, the effects on a fetus of substances like phthalates (plasticizers that are likely found in a number of products throughout your home), bisphenol A (another substance used in the production of plastics - found frequently in food and drink containers), and even tobacco smoke, are still being investigated. But a pregnancy free from exposure to any potentially harmful substances doesn't guarantee normal neural development. Even factors that are natural and more difficult to control, like maternal infection during pregnancy, are suspected of being detrimental in some cases.

To complicate the issue even further, it is difficult to predict who will be affected by these environmental insults and who will not. It seems that there may be a genetic susceptibility to neurodevelopmental damage that causes a particular exposure to be detrimental to one fetus, while it may not have a major impact on another with a different genetic makeup. This complication, however, also provides an opportunity to learn more about the etiology of neurodevelopmental disorders. For, if we can learn what mechanism is failing in the fetus who is affected, but functioning in the fetus who is not, then our understanding of the origin of these disorders will be drastically improved.

In a paper published last week in Neuron, Hashimoto-Torii et al. approached the problem from this angle and examined the role of heat shock proteins in neurodevelopmental problems. Heat shock proteins are a class of proteins whose expression increases during times of stress. They earned their name when it was discovered in the early 1960s that high levels of heat increased their expression in Drosophila (fruit flies). Since then, it has been learned that heat shock protein expression increases during all sorts of stress, including infection, starvation, hypoxia (lack of oxygen), and exposure to toxins like alcohol. Thus, some also refer to heat shock proteins as stress proteins.

To investigate the role of heat shock proteins in neurodevelopmental disorders, Hashimoto-Torii et al. exposed mouse embryos to three different types of environmental insults. They injected pregnant mice with either alcohol, methylmercury, or a seizure-inducing drug. Then, they looked to see how the brains of the embryos reacted. As they hypothesized, they saw a significant increase in the expression of a transcription factor (heat shock factor 1 or HSF1) that promotes the production of heat shock proteins.

When the researchers investigated the effects of prenatal exposure to the insults listed above in mice lacking an HSF1 gene (HSF1 knockout mice), they saw that the exposed mothers had smaller litters than control mice. The pups that were born displayed malformations consistent with neurodevelopmental damage, greater susceptibility to seizures after birth, and reduced brain size. The reduction in brain volume seemed to be due to decreased neurogenesis after the insult.

To make a clearer connection between heat shock protein activation and human disease, the researchers exposed stem cells derived from schizophrenic patients to methylmercury and alcohol, and compared the response of the "schizophrenic cells" to the response of cells from non-schizophrenic (control) patients. They didn't see an overall difference in heat shock protein expression between the two types of cells, but they did see significant variability in expression among the schizophrenic cells. In other words, both schizophrenic and control cells increased expression of heat shock protein after an insult, but some of the schizophrenic cells appeared to increase expression more or less than others. The control cells all displayed a relatively similar increase in expression. This suggests that there may be an abnormal response involving heat shock proteins in individuals with a certain genetic predisposition; perhaps this abnormal response makes the individual more susceptible to disrupted neurodevelopment.

Thus, the study by Hashimoto-Torii et al. points to heat shock proteins as a potential culprit behind what goes wrong in early brain development to lead to psychiatric disorders like schizophrenia and autism. More research will need to be done, however, to verify this role for heat shock proteins. And, even if future research supports this finding, it is likely that heat shock proteins are still only part of the puzzle. But the puzzle is complex, and so we will need to add many of these little pieces before we can begin to comprehend the whole picture.


Hashimoto-Torii, K., Torii, M., Fujimoto, M., Nakai, A., El Fatimy, R., Mezger, V., Ju, M., Ishii, S., Chao, S., Brennand, K., Gage, F., & Rakic, P. (2014). Roles of Heat Shock Factor 1 in Neuronal Response to Fetal Environmental Risks and Its Relevance to Brain Disorders. Neuron. DOI: 10.1016/j.neuron.2014.03.002

Hox Genes and Neurodevelopment

In the 1980s, scientists knew surprisingly little about the role genes play in the development of an embryo. The discovery of a particular group of genes, however, known as Hox genes, drastically improved our understanding of embryology. At the same time it revolutionized genetics and developmental biology.

In the 1890s, an English biologist named William Bateson was repeatedly amazed when he came across “freaks” of nature in his studies. These included examples like a moth born with wings where its legs should be, or an insect born with legs for antennae. Bateson gave a name to these aberrations, calling them homeosis (meaning the transformation of one body part into another). In 1915, another biologist, Calvin Bridges, noticed homeosis in fruit flies that were born with an extra pair of wings. Intrigued, he kept this strain alive through selective mating.

In the 1980s, scientists were finally able to isolate the gene that was causing the extra wing mutation in the fruit fly. They traced it back to a small group of genes, which they called Hox genes. They found that, by manipulating these genes, they could create virtual monsters, such as flies with legs that came bursting out of the middle of their heads.

The creation of these monsters, however, helped to elucidate the function of Hox genes. Hox is short for homeobox, which is the name for the DNA sequence that these genes have in common. Hox genes become active in early embryonic development. Their job is to designate which parts of the embryo will turn into which body parts (legs, wings, head, etc.). Hox genes are so specific that, if one that controls limb development is transplanted to the head of the embryo, a limb will grow out of the head.

Scientists began to find these types of master control genes in every embryo, regardless of the organism. Even more surprisingly, the genes are remarkably similar across species. Scientists found they could replace a defective Hox gene in a fly with one from a mouse without any ill effects. Hox genes and other master control genes are present in humans as well, and play the same role in embryonic development. This congruity across species indicates that Hox and other master control genes are probably an ancient evolutionary mechanism, one that arose before much speciation took place but has remained present and active ever since.

While understanding Hox and master control genes has led to great advancements in the comprehension of embryonic development, the development of the brain has still remained a little unclear. Specifically, scientists have had trouble figuring out how specialized neurons in our brain are formed in one region, then migrate to the areas they eventually have to settle in in order to function properly.

A study published online this week in PLoS Biology may shed some light on the issue, however, and Hox genes are an important part of the explanation. The authors of the study investigated pontine (from the pons) neurons in mice. Pontine neurons are formed in the rear of the brain and then must migrate in the brainstem to eventually become part of the precerebellar system. This is an area that is necessary for coordinated motor movement, and provides the cerebellum with its principal input. So the question is, once these pontine neurons are formed, how do they “know” they have to travel to the precerebellar region?

The researchers who conducted this study found Hox genes to be the guide that leads the neurons to their appropriate resting place. A specific Hox gene, Hoxa2, was found to influence neuronal migration, keeping the neurons from going astray through a pathway of molecular signaling. The Hoxa2 gene regulates the expression of a particular receptor, known as Robo. This receptor binds to a chemical called Slit, which prevents the neurons from being drawn toward other chemoattractants. This allows the neurons to ignore outside influences and travel directly to the precerebellar region, where they belong. When the scientists knocked out the Hoxa2 gene, the pontine neurons were unable to resist being drawn to chemoattractants and often didn’t reach their final destination.

This adds some insight into the process of neuronal migration, something that has puzzled neuroscientists for years. It is just the beginning of the story, however. Not all of the neurons responded to Hoxa2, suggesting there may be other Hox genes involved in brain development. Thus, scientists will continue to search for other Hox genes that are part of the process. The success of this study, however, at least provides an indication that Hox genes, some of the most highly conserved genes in our bodies, may also be responsible for some of the most important aspects of brain development.


Geisen, M.J., Meglio, T.D., Pasqualetti, M., Ducret, S., Brunet, J., Chedotal, A., Rijli, F.M., & Zoghbi, H.Y. (2008). Hox Paralog Group 2 Genes Control the Migration of Mouse Pontine Neurons through Slit-Robo Signaling. PLoS Biology, 6(6), e142. DOI: 10.1371/journal.pbio.0060142