the myth of the neuromyth

In 1999, at the end of the ‘decade of the brain’, the Museum of Life in Rio de Janeiro was planning a series of events aimed at enhancing the general public’s understanding of brain research. As part of the planning process, a survey was undertaken to find out what the population of Rio de Janeiro, especially students, actually understood about the brain. The findings are set out in a paper by Suzana Herculano-Houzel entitled:

“Do you know your brain? A survey on public neuroscience literacy at the closing of the decade of the brain”.

Respondents were asked to give their opinion (yes, no or don’t know; Y/N/DK) on whether each of 95 statements about the brain was correct. 83 statements were directly related to brain research and 12 indirectly related. The general public scored around 50% correct responses to the items. Not surprisingly, the percentage of correct responses increased with the number of years in education and, to some extent, with the amount of reading respondents did – of books, science magazines and newspapers.

The statements in the survey were, of necessity, short. Examples include:

  • “we use our brains 24 hours a day”
  • “when a brain region is damaged and dies, other parts of the brain can take up its function” and
  • “we usually utilize only 10% of our brain”.

A core problem with condensing complex, uncertain or contentious research findings into single-sentence assertions is that much of the nuance is lost; the findings often can’t be accurately summarised in a short statement. Herculano-Houzel addressed this problem by asking neuroscientists to respond to the survey; 35 replied. She took 70% agreement amongst them as the threshold for determining the correctness of an assertion, and 56 items met this criterion.
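
To make that scoring rule concrete, here is a minimal sketch (in Python) of one plausible reading of the criterion: an item counts as settled when at least 70% of the 35 responding neuroscientists give the same yes/no answer. The counts below are invented for illustration and are not Herculano-Houzel’s data.

    def meets_criterion(yes, no, dont_know, threshold=0.70):
        # True if the most common yes/no response reaches the agreement threshold
        total = yes + no + dont_know
        return max(yes, no) / total >= threshold

    print(meets_criterion(yes=30, no=3, dont_know=2))   # 30/35 = 86% agreement: passes
    print(meets_criterion(yes=20, no=10, dont_know=5))  # 20/35 = 57% agreement: excluded

Whether ‘don’t know’ responses count in the denominator is exactly the sort of detail that affects which items pass the threshold.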

The neuroscience literacy of trainee teachers

A decade later, some items from the Herculano-Houzel survey were used by Paul Howard-Jones and colleagues at the Graduate School of Education at the University of Bristol, England, to explore the neuroscience literacy of trainee teachers. As the authors point out, there is considerable concern about the prevalence of ‘neuromyths’ in education; they cite an OECD report published in 2002 that defined a neuromyth as a “misconception generated by a misunderstanding, a misreading or a misquoting of facts scientifically established”. (A later volume lists some common neuromyths.) As examples, Howard-Jones et al cite the Visual, Auditory and Kinaesthetic (VAK) Learning Styles model, left-brain/right-brain learning preferences, Brain Gym, and common perceptions of the effects of water, sugar and omega-3 oils on learning.

The authors gathered responses to 38 assertions – 15 correct, 16 incorrect, and 7 open to subjective opinion – from 158 graduate trainee teachers coming to the end of a PGCE course. 16 of the assertions were adapted from the Herculano-Houzel study and the remainder derived from concepts identified in preliminary interviews and previous research by the authors. Participants were asked whether the assertions reflected their opinions and to respond Y/N/DK. At first glance, the two studies look alike, and indeed Howard-Jones et al’s trainee teachers’ responses were broadly comparable to those of Herculano-Houzel’s graduates. But there are some important differences between the two that need to be borne in mind in respect of Howard-Jones et al’s conclusions.

ambiguity

As I see it, there are two sources of ambiguity in the Herculano-Houzel survey. One is the challenge of condensing research findings accurately into a single-sentence assertion; Herculano-Houzel addressed this by noting the level of agreement amongst neuroscientists. The other is that Y/N/DK responses can fail to represent the degree of respondents’ agreement with single-sentence assertions representing complex, uncertain or contentious research findings. Respondents might only slightly agree or disagree with a statement, or might reply ‘don’t know’ meaning:

  • ‘I don’t know’
  • ‘Scientists don’t know’
  • ‘Neither yes nor no is accurate’ or
  • ‘I know what my opinion is but I don’t know whether there’s scientific evidence to support it’.

 

This ambiguity in responses didn’t matter too much in the Brazilian study because its purpose was to get a broad overview of what the public knew or didn’t know about the brain. The public understanding of the brain is not unimportant, but it’s not as important as how teachers understand the brain, since a teacher could, directly or indirectly, pass on a misunderstanding about brain function to literally thousands of students.

Howard-Jones et al discuss in some detail the possibility that respondents might have interpreted assertions in different ways, or that there might have been differences in understanding behind the responses. I think this could have been addressed by designing the questionnaire differently, because restricting teachers’ responses to Y/N/DK might not produce a sufficiently accurate picture of what teachers know or don’t know.

For example, a graduate who’d studied neuroscience might be aware of exceptions to an assertion that was broadly true, or might interpret the wording of the assertion differently to an arts graduate who knew very little about biology. Asking the trainee teachers to use a scale to express their degree of agreement with the statements would have been one solution. They could also have been asked to indicate how much they knew about the relevant scientific evidence.
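
For illustration only, here is a hypothetical sketch of what such an item format might look like: a graded agreement scale plus a self-rating of familiarity with the relevant evidence, rather than a bare Y/N/DK. The scale labels are my own assumptions, not taken from any of the studies discussed.

    # Hypothetical item format: graded agreement plus familiarity with the evidence
    AGREEMENT = {
        1: "strongly disagree",
        2: "disagree",
        3: "neither agree nor disagree",
        4: "agree",
        5: "strongly agree",
    }
    FAMILIARITY = {
        1: "no knowledge of the scientific evidence",
        2: "aware of popular accounts only",
        3: "have read some of the research literature",
    }

    def record_response(statement, agreement, familiarity):
        # Store one respondent's graded answer to a single survey item
        assert agreement in AGREEMENT and familiarity in FAMILIARITY
        return {"statement": statement, "agreement": agreement, "familiarity": familiarity}

    # Example: a respondent who agrees with a statement but knows only popular accounts
    print(record_response("we use our brains 24 hours a day", agreement=4, familiarity=2))

A format along these lines would at least distinguish a confident ‘no’ from a hesitant one, and an informed opinion from a guess.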

Neuromyths in education: Prevalence and predictors of misconceptions among teachers

Earlier this year, a paper entitled “Neuromyths in education: Prevalence and predictors of misconceptions among teachers” describing a similar study was published by Dekker et al, with Howard-Jones as a co-author. This study looked at the prevalence and predictors of neuromyths amongst teachers in the UK and the Netherlands. The survey contained 32 statements about the brain and its influence on learning. 15 were ‘educational neuromyths’ derived from the 2002 OECD publication and the Howard-Jones study, and the other 17 were ‘general assertions about the brain’.

Respondents were asked to say whether each statement was correct or incorrect, or whether they didn’t know (C/I/DK). Dekker et al found a slightly higher level (70%) of correct responses to the general assertions about the brain than previous studies had found amongst graduates, and a higher level of correct responses to the neuromyth statements (51%) than Howard-Jones et al had (34%). Dekker et al also found that, contrary to the previous studies, greater general knowledge about the brain did not protect teachers from believing neuromyths.

This finding is not only counterintuitive but it runs counter to the findings of the previous studies on which the Dekker et al study was based. If Dekker et al had used the same questionnaire as Herculano-Houzel, their findings would raise some interesting questions. Were the differences due to cultural, geographical or linguistic factors? Or was this a finding peculiar to teachers? But the Dekker et al questionnaire wasn’t identical to the Herculano-Houzel one. This suggests that the questionnaire itself could have contributed to the counterintuitive findings. Two obvious differences between Dekker et al and the previous studies are the ways ambiguity was tackled in relation to the statements and responses.

what is a neuromyth?

The researchers addressed statement ambiguity in different ways: Herculano-Houzel measured agreement on the assertions amongst neuroscientists; Howard-Jones et al discussed in some detail the possible variations in interpretation of specific statements; and the OECD chapter from which some of Dekker et al’s neuromyths were derived explores them in some depth. But I could find no indication in the Dekker et al paper that the ambiguity of either the statements or the responses had been addressed, nor any explanation of how the wording of the statements had been chosen.

Most of the statements that Dekker et al derived from Herculano-Houzel scored high levels of agreement amongst neuroscientists. But there were some exceptions, for example:

  • The ‘correct’ statement (14) “when a brain region is damaged other parts of the brain can take up its function” scored just on the 70% threshold (24% of neuroscientists disagreed and 6% didn’t know)
  • The ‘incorrect’ statement (7) “we only use 10% of our brain” scored below the agreement threshold, with only 68% of neuroscientists disagreeing with it (6% agreed and 26% didn’t know).
  • In addition, the wording of the statements was changed between surveys: Herculano-Houzel has “when a brain region is damaged and dies, other parts of the brain can take up its function” and “we usually utilize only 10% of our brain”, respectively.

We don’t know whether the variation in neuroscientists’ levels of agreement resulted from debatable research findings or because of differences in interpretation of the wording. If the latter, it’s possible that the Dekker et al results were affected by respondents’ interpretations.

Some of Dekker et al’s general statements are open to interpretation too. Item 3 “boys have bigger brains than girls” is true if you compare the means of brain size for boys and girls of the same age. However, the distributions of individual measures overlap, which means that not all boys have bigger brains than girls of the same age, as you can see from the graphs below, taken from Lenroot et al (2007).

[Graphs from Lenroot et al (2007): (1) scatterplot of longitudinal measurements of total brain volume for males (N=475 scans, dark blue) and females (N=354 scans, red); (2) mean total brain volume by age in years for males and females, where the middle line of each set of three represents the mean and the upper and lower lines the 95% confidence intervals.]
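
To illustrate the point about means versus distributions, here is a small simulation sketch. The means and standard deviations are illustrative assumptions rather than values taken from Lenroot et al (2007); the point is simply that a clear difference in group means can coexist with substantial overlap between individuals.

    # Simulation sketch: group means differ but individual distributions overlap
    import random
    random.seed(1)

    N = 100_000
    boys = [random.gauss(1260, 100) for _ in range(N)]   # assumed mean 1260 cm3, sd 100
    girls = [random.gauss(1130, 100) for _ in range(N)]  # assumed mean 1130 cm3, sd 100

    mean_difference = sum(boys) / N - sum(girls) / N
    # proportion of random girl/boy pairings in which the girl's brain is the larger
    girl_larger = sum(g > b for g, b in zip(girls, boys)) / N

    print(f"difference in means: {mean_difference:.0f} cm3")
    print(f"girl's brain larger in {girl_larger:.1%} of random pairings")

With these made-up figures, the girl’s brain is the larger one in roughly one random pairing in five, even though the boys’ mean is clearly higher.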

Then there’s item 12, which says “there are critical periods in childhood after which certain things can no longer be learned”. The research suggests that there are indeed critical periods for some sensory functions – children with certain eye defects corrected after a certain age never develop normal vision, and children deprived of early language input have failed to develop normal speech. This implies that whether the statement is ‘correct’ or not depends on what is meant by ‘certain things’ and ‘learned’. Then take item 14, which claims the statement “learning is not due to the addition of new cells to the brain” is ‘incorrect’. This assertion doesn’t appear to be incorrect for the hippocampus. Admittedly, much of the relevant research has taken place since this item appeared in the Herculano-Houzel survey, but the findings had been around for a decade before the Dekker et al study, and this was a point raised by Howard-Jones et al.

In addition, some statements differed only in respect of some fairly fine-grained distinctions. Item 15 says “individuals learn better when they receive information in their preferred learning style (e.g., auditory, visual, kinesthetic)” and is deemed ‘incorrect’. But item 27 “individual learners show preferences for the mode in which they receive information (e.g., visual, auditory, kinesthetic)” is deemed ‘correct’.

Both items distinguish generic preferred learning styles (mine happens to consist of reading new material whilst propped up in bed, followed by mulling it over while I go for a walk) from a specific Learning Styles model derived from Neuro-Linguistic Programming theory involving three named sensory domains. Respondents who are aware of criticisms of the VAK Learning Styles model might justifiably question whether individual learners actually do show preferences for the mode in which they receive information; what about people who learn best from tv documentaries for example? Audio-visual communication is itself a mode of information transmission, but it involves two sensory modalities. And what about constraints imposed by the learning objective itself? Most people would prefer to learn to drive or swim by receiving information kinesthetically, whatever their usual preferences, because it’s extremely difficult to learn to do either using only visual and/or auditory modalities.

The upshot is that at least 7 of Dekker et al’s 32 statements contain quite high levels of ambiguity, due either to the nature of the relevant research findings or to the wording of the assertions. It’s quite possible that Dekker et al’s counterintuitive finding, that general knowledge about the brain didn’t protect teachers against believing neuromyths, is actually an experimental artifact.

neuromyths: correct or incorrect, true or false?

The second difference was in the way response ambiguity was dealt with. Herculano-Houzel and Howard-Jones et al used subjective agreement (Y/N/DK). Dekker et al used objective ‘correctness’ (C/I/DK) – which isn’t the same thing.

I came across the Dekker et al study via Kevin Wheldall’s blog Notes from Harefield. When responding to my comments about ambiguity in survey items, he noted that the Dekker et al statements were presented as an online quiz on Leah Tomlin’s Education Elf blog. The quiz differs from the Dekker et al survey in that a ‘don’t know’ response isn’t an option. In other words, in the quiz itself there’s no acknowledgement of any possible ambiguity in the assertions – although several people who have completed it have commented on ambiguities in the statements. The Education Elf discusses the study in more detail here.

Following the trail of these studies has been a fascinating demonstration of what this blog is named after – logical incrementalism. The research questions have shifted from the degree of ‘neuroscience literacy’ of the public to the prevalence of ‘neuromyths’ amongst teachers. The measure of the ‘correctness’ of statements has shifted too: from the percentage of neuroscientists agreeing with each statement, to statements being categorized as either ‘correct’ or ‘incorrect’ with no explanation of the criteria for that categorization, or, if one includes the Education Elf quiz, categorized as ‘true’ or ‘false’ with no explanation. This is despite an extensive discussion in the literature of the nature of the misconceptions, misunderstandings, misreadings and misquotings involved, and despite respondents drawing attention to ambiguities that might have affected their responses.

There are obvious advantages in re-using survey items developed in previous studies. Many methodological issues would have been addressed in the initial survey design and any residual weaknesses would have become apparent from the results. However, there are risks involved in making incremental changes to previous questionnaires unless attention is paid to the parameters that guided their development. In this case, the criterion for ‘correctness’ has been largely overlooked, as has the ambiguity that’s inevitably an outcome of asking for Y/N/DK responses.

There’s no question that misconceptions, misunderstandings, misreadings and misquotings of the neuroscience literature have contributed to the prevalence of neuromyths amongst the general public and amongst teachers. Teachers might indeed be especially susceptible because findings from neuroscience are directly applicable to their work and because many who haven’t studied biological sciences are likely to rely on simplified sources for information about the brain.

Having said that, I’d suggest that labelling complex, uncertain or contentious research findings as either correct or incorrect, true or false, facts or myths, is what got us into this mess in the first place. Clearly teachers need more, and better, information about the brain, but some basic biology might prove more useful than putting a tick or cross next to oversimplified ideas.

References

Dekker, S, Lee, NC, Howard-Jones, P & Jolles, J (2012). Neuromyths in Education: Prevalence and Predictors of Misconceptions among Teachers. Frontiers in Psychology, 3, 429.

Herculano-Houzel, S (2002). Do you know your brain? A survey on public neuroscience literacy at the closing of the decade of the brain. Neuroscientist 8, 98–110.

Howard-Jones, P, Franey, L, Mashmoushi, R & Liao, YC (2009). The neuroscience literacy of trainee teachers. Paper presented at the British Educational Research Association Annual Conference, Manchester.

Lenroot, RK, Gogtay, N, Greenstein, DK, Molloy, E, Wallace, GL, Clasen, LS, Blumenthal JD, Lerch, J, Zijdenbos, AP, Evans, AC, Thompson, PM & Giedd, JN (2007). Sexual dimorphism of brain developmental trajectories during childhood and adolescence. NeuroImage 36, 1065–1073.

Organisation for Economic Co-operation and Development (2002). Understanding the brain: Towards a new learning science. Paris: OECD.

Organisation for Economic Co-operation and Development (2007). Understanding the Brain: The birth of a learning science. Paris: OECD.

Edited for clarity 3/6/15.

