phlogiston for beginners

Say “learning styles” to some teachers and you’re likely to get your head bitten off. Tom Bennett, the government’s behaviour tsar/guru/expert/advisor, really, really doesn’t like the idea of learning styles, as he has made clear in a series of blogposts exploring the metaphor of the zombie.

I’ve come in for a bit of flak from various sources for suggesting that Bennett might have rather over-egged the learning styles pudding. I’ve been accused of not accepting the evidence, not admitting when I’m wrong, advancing neuromyths, being a learning styles advocate, being a closet learning styles advocate, and by implication not caring about the chiiiiiiiildren and being responsible for a metaphorical invasion by the undead. I reject all those accusations.

I’m still trying to figure out why learning styles have caused quite so much fuss. I understand that teachers might be a bit miffed about being told by schools to label children as visual, auditory or kinaesthetic (VAK) learners only to find there’s no evidence that they can be validly categorised in that way. But the time and money wasted on learning styles surely pales into insignificance next to the amounts squandered on the industry that’s sprung up around some questionable assessment methods, an SEN system that a Commons Select Committee pronounced not fit for purpose, or a teacher training system that for generations has failed to equip teachers with the skills they need to evaluate popular wheezes like VAK and brain gym.

And how many children have suffered actual harm as a result of being given a learning style label? I’m guessing very few compared to the number whose lives have been blighted by failing the 11+, being labelled ‘educationally subnormal’, or more recent forms of failure to meet the often arbitrary requirements of the education system. What is it about learning styles?

the learning styles neuromyth

I made the mistake of questioning some of the assumptions implicit in this article, notably that the concept of learning styles is a false belief, that it’s therefore a neuromyth and is somehow harmful in that it raises false hopes about transforming society.

My suggestion that the evidence for the learning styles concept is mixed rather than non-existent, that there are some issues around the idea of the neuromyth that need to be addressed, and that the VAK idea, even if wrong, probably isn’t the biggest hole in the education system’s bucket, was taken as a sign that my understanding of the scientific method must be flawed.

the evidence for aliens

One teacher (no names, no pack drill) said “This is like saying the ‘evidence for aliens is mixed’”. No it isn’t. There are so many planets in the universe that it’s highly unlikely Earth is the only one supporting life-forms, but so far we have next to no evidence of their existence. A learning style, though, isn’t a life-form; it’s a construct, a label for phenomena that researchers have observed, and a pretty woolly label at that. It could refer to a wide range of very different phenomena, some of which are real, some of which are experimental artefacts, and some of which might be figments of a researcher’s imagination. It’s pointless to speculate about whether learning styles exist, because the answer depends on what you label a ‘learning style’. Life-forms are a different kettle of fish; there’s some debate around what constitutes a life-form and what doesn’t, but the concept is far more tightly specified than any learning style has ever been.

you haven’t read everything

I was then chided for pointing out that Tom Bennett had said he hadn’t finished reading the Coffield Learning Styles Review, when (obviously) I hadn’t read everything there was to read on the subject either. But I hadn’t complained that Tom hadn’t read everything; I was pointing out that, by his own admission in his book Teacher Proof, he’d stopped reading before he reached the part of the Coffield review that discusses learning styles models found to have validity and reliability, so it’s not surprising he came to a conclusion Coffield didn’t support.

my evidence weighs more than your evidence

Then, “I’ve seen the tiny, tiny evidence you cite to support LS. Dwarfed by oceans of ‘no evidence’. There’s more evidence for ET than LS”. That’s not how the evaluation of scientific evidence works. It isn’t a case of putting the ‘for’ evidence in one pan of the scales and the ‘against’ evidence in the other, with the heaviest evidence winning. On that basis, the heliocentric theories of Copernicus and Kepler would never have seen the light of day.

how about homeopathy?

Finally, “How about homeopathy? Mixed evidence from studies.” The implication is that if I’m not dismissing learning styles because the evidence is mixed, then I can’t dismiss homeopathy either. Again, the analogy doesn’t hold. Research shows that there is an effect associated with homeopathic treatments – something happens in some cases. But the theory of homeopathy doesn’t make sense in the context of what we know about biology, chemistry and physics, which suggests that the problem lies in the explanation for the effect, not the effect itself. The concept of learning styles, by contrast, doesn’t conflict with what we know about the way people learn. It’s quite possible that people do have stable traits when it comes to learning. Whether or not they do, and if so what those traits are, is another matter.

Concluding from complex and variable evidence that learning styles don’t exist, and that not dismissing them out of hand is akin to believing in aliens and homeopathy, looks to me suspiciously like saying  “Phlogiston? Pfft! All that stuff about iron filings increasing in weight when they combust is a load of hooey.”

the myth of the neuromyth

In 1999, at the end of the ‘decade of the brain’, the Museum of Life in Rio de Janeiro was planning a series of events aimed at enhancing the general public’s understanding of brain research. As part of the planning process, a survey was undertaken to find out what the population of Rio de Janeiro, especially students, actually understood about the brain. The findings are set out in a paper by Suzana Herculano-Houzel entitled:

“Do you know your brain? A survey on public neuroscience literacy at the closing of the decade of the brain”.

Respondents were asked about their opinion (yes, no, or don’t know; Y/N/DK) as to whether each of 95 statements about the brain was correct or not. 83 statements were directly related to brain research and 12 indirectly related. The general public scored around 50% correct responses to the items. Not surprisingly, the percentage of correct scores increased with number of years in education, and to some extent, with the amount of reading respondents did – of books, science magazines and newspapers.

The statements in the survey were, of necessity, short. Examples include:

  • “we use our brains 24 hours a day”
  • “when a brain region is damaged and dies, other parts of the brain can take up its function” and
  • “we usually utilize only 10% of our brain”.

A core problem with condensing complex, uncertain or contentious research findings into single-sentence assertions is that the findings often can’t be accurately summarised that way. Herculano-Houzel addressed this problem by asking neuroscientists to respond to the survey themselves; 35 replied. She took 70% agreement amongst them as the threshold for determining the correctness of an assertion, and 56 items met this criterion.
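To make the threshold concrete, here’s a sketch in Python of how a 70% agreement criterion filters survey items. The response counts are made up for illustration (loosely modelled on the percentages discussed later in this post), not Herculano-Houzel’s actual data, and the item names are my own labels.

```python
# Illustrative sketch: applying a 70% agreement threshold among expert raters.
# Each item has a keyed answer ('Y' or 'N') and a list of experts' responses
# ('Y', 'N' or 'DK'). An item is retained only if the proportion of responses
# matching the keyed answer reaches the threshold.

AGREEMENT_THRESHOLD = 0.70

def item_agreement(keyed: str, responses: list[str]) -> float:
    """Fraction of all responses (including 'DK') that match the keyed answer."""
    return sum(r == keyed for r in responses) / len(responses)

def retained_items(items: dict[str, tuple[str, list[str]]]) -> list[str]:
    """Names of the items whose expert agreement meets the threshold."""
    return [name for name, (keyed, responses) in items.items()
            if item_agreement(keyed, responses) >= AGREEMENT_THRESHOLD]

# Hypothetical items with invented response counts:
items = {
    "damaged region compensated": ("Y", ["Y"] * 70 + ["N"] * 24 + ["DK"] * 6),
    "we use only 10% of brain":   ("N", ["N"] * 68 + ["Y"] * 6 + ["DK"] * 26),
}

print(retained_items(items))  # only the first item reaches the 70% threshold
```

Note that on this criterion an item just under the threshold is dropped entirely, however close the call: 68% disagreement counts for nothing, 70% counts as ‘correct’.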

The neuroscience literacy of trainee teachers

A decade later, some items from the Herculano-Houzel survey were used by Paul Howard-Jones and colleagues at the Graduate School of Education at the University of Bristol, England, to explore the neuroscience literacy of trainee teachers. As the authors point out, there is considerable concern about the prevalence of ‘neuromyths’ in education – citing an OECD report published in 2002 that defined a neuromyth as a “misconception generated by a misunderstanding, a misreading or a misquoting of facts scientifically established”. (A later volume lists some common neuromyths.) Howard-Jones et al cite the Visual, Auditory and Kinaesthetic (VAK) Learning Style, left-brain/right-brain learning preferences, brain gym models and common perceptions of the effects of water, sugar and omega-3 oils on learning as examples.

The authors gathered responses from 158 graduate trainee teachers coming to the end of a PGCE course, to 38 assertions – 15 correct, 16 incorrect, and 7 open to subjective opinion. 16 of the assertions were adapted from the Herculano-Houzel study and the remainder derived from concepts identified in preliminary interviews and previous research by the authors. Participants were asked whether the assertions reflected their opinions and to respond Y/N/DK. At first glance, the two studies look alike, and indeed Howard-Jones et al’s trainee teachers’ responses were broadly comparable to those of Herculano-Houzel’s graduates. But there are some important differences between the two that need to be borne in mind in respect of Howard-Jones et al’s conclusions.


As I see it, there are two sources of ambiguity in the Herculano-Houzel survey. One is the challenge of condensing research findings accurately into a single-sentence assertion; Herculano-Houzel addressed this by noting the level of agreement amongst neuroscientists. The other is that Y/N/DK responses can fail to represent the degree of respondents’ agreement with single-sentence assertions representing complex, uncertain or contentious research findings. Respondents might only slightly agree or disagree with a statement, or might reply ‘don’t know’ meaning:

  • ‘I don’t know’
  • ‘Scientists don’t know’
  • ‘Neither yes nor no is accurate’, or
  • ‘I know what my opinion is, but I don’t know whether there’s scientific evidence to support it’.

This ambiguity in responses didn’t matter too much in the Brazilian study because its purpose was to get a broad overview of what the public knew or didn’t know about the brain. The public understanding of the brain is not unimportant, but it’s not as important as how teachers understand the brain, since a teacher could, directly or indirectly, pass on a misunderstanding about brain function to literally thousands of students.

Howard-Jones et al discuss in some detail the possibility that respondents might have interpreted assertions in different ways, or that there might have been differences in understanding behind the responses. I think this could have been addressed by designing the questionnaire differently, because restricting teachers’ responses to Y/N/DK might not produce a sufficiently accurate picture of what teachers know or don’t know.

For example, a graduate who’d studied neuroscience might be aware of exceptions to an assertion that was broadly true, or might interpret the wording of the assertion differently to an arts graduate who knew very little about biology. Asking the trainee teachers to use a scale to express their degree of agreement with the statements would have been one solution. They could also have been asked to indicate how much they knew about the relevant scientific evidence.
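A sketch of why the response format matters: collapsing a graded scale into Y/N/DK throws away exactly the information a degree-of-agreement design would capture. The five scale points and the mapping below are my own assumptions for illustration, not anything used in the surveys discussed.

```python
# Illustrative sketch: a 5-point agreement scale collapsed into Y/N/DK.
# Respondents with quite different degrees of conviction become
# indistinguishable once their answers are forced into three categories.

LIKERT = ["strongly disagree", "slightly disagree", "neither",
          "slightly agree", "strongly agree"]

def collapse(response: str) -> str:
    """Map a 5-point response onto Y/N/DK categories."""
    if "agree" in response and "disagree" not in response:
        return "Y"
    if "disagree" in response:
        return "N"
    return "DK"   # 'neither' has to land somewhere; 'don't know' is the usual dump

# A hesitant respondent and a confident one produce the same datum:
print(collapse("slightly agree"))   # -> Y
print(collapse("strongly agree"))   # -> Y
```

A graduate who knows the exceptions to a broadly-true assertion might tick ‘slightly agree’ on a scale; forced to choose Y, N or DK, that nuance vanishes from the data.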

Neuromyths in education: Prevalence and predictors of misconceptions among teachers

Earlier this year, a paper entitled “Neuromyths in education: Prevalence and predictors of misconceptions among teachers” describing a similar study was published by Dekker et al, with Howard-Jones as a co-author. This study looked at the prevalence and predictors of neuromyths amongst teachers in the UK and the Netherlands. The survey contained 32 statements about the brain and its influence on learning. 15 were ‘educational neuromyths’ derived from the 2002 OECD publication and the Howard-Jones study, and the other 17 were “general assertions about the brain”.

Respondents were asked to say whether each statement was correct or incorrect, or that they didn’t know (C/I/DK). Dekker et al found a slightly higher level (70%) of ‘correct’ responses to general assertions about the brain than previous studies had found amongst graduates, and a higher level of ‘correct’ perceptions of the neuromyth statements (51%) than Howard-Jones et al (34%). Contrary to the previous studies, however, Dekker et al also found that greater general knowledge about the brain did not protect teachers from believing neuromyths.

This finding is not only counterintuitive, it runs counter to the findings of the previous studies on which the Dekker et al study was based. If Dekker et al had used the same questionnaire as Herculano-Houzel, their findings would raise some interesting questions. Were the differences due to cultural, geographical or linguistic factors? Or was this a finding peculiar to teachers? But the Dekker et al questionnaire wasn’t identical to the Herculano-Houzel one, which suggests that the questionnaire itself could have contributed to the counterintuitive findings. Two obvious differences between Dekker et al and the previous studies lie in the ways ambiguity was handled, both in the statements and in the responses.

what is a neuromyth?

The different researchers addressed statement ambiguity in different ways. Herculano-Houzel measured agreement on the assertions amongst neuroscientists; Howard-Jones et al discussed in some detail the possible variations in interpretation of specific statements; and the OECD chapter from which some of Dekker et al’s neuromyths were derived explores them in some depth. But I could find no indication in the Dekker et al paper that ambiguity, either of the statements or of the responses, had been addressed at all. Nor could I find an explanation of how the wording of the statements had been chosen.

Most of the statements that Dekker et al derived from Herculano-Houzel scored high levels of agreement amongst neuroscientists. But there were some exceptions, for example:

  • The ‘correct’ statement (14) “when a brain region is damaged other parts of the brain can take up its function” scored exactly at the 70% threshold (24% of neuroscientists disagreed and 6% didn’t know).
  • The ‘incorrect’ statement (7) “we only use 10% of our brain” scored below the agreement threshold, with only 68% of neuroscientists disagreeing with it (6% agreed and 26% didn’t know).
  • In addition, the wording of the statements was changed between surveys; Herculano-Houzel has “when a brain region is damaged and dies, other parts of the brain can take up its function” and  “we usually utilize only 10% of our brain”, respectively.

We don’t know whether the variation in neuroscientists’ levels of agreement resulted from debatable research findings or because of differences in interpretation of the wording. If the latter, it’s possible that the Dekker et al results were affected by respondents’ interpretations.

Some of Dekker et al’s general statements are open to interpretation too. Item 3 “boys have bigger brains than girls” is true if you compare the means of brain size for boys and girls of the same age. However, the distributions of individual measures overlap, which means that not all boys have bigger brains than girls of the same age, as you can see from the graphs below, taken from Lenroot et al (2007).


[Figure: scatterplot of longitudinal measurements of total brain volume for males (N = 475 scans, dark blue) and females (N = 354 scans, red), from Lenroot et al (2007)]

[Figure: gray matter subdivisions – (a) frontal lobe, (b) parietal lobe, (c) temporal lobe, (d) occipital lobe – from Lenroot et al (2007)]
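The overlap point is easy to demonstrate with a toy simulation: two groups can differ in mean while a substantial proportion of individual comparisons go the ‘wrong’ way. The means and standard deviation below are invented for illustration; they are not Lenroot et al’s figures.

```python
# Illustrative sketch: group means can differ while individual values overlap.
# Parameters are made-up illustrative numbers, NOT Lenroot et al's data.
import random

random.seed(42)

# Assumed volumes in cm^3: boys' mean 1200, girls' mean 1100, both sd 100.
boys  = [random.gauss(1200, 100) for _ in range(10_000)]
girls = [random.gauss(1100, 100) for _ in range(10_000)]

mean_boys  = sum(boys) / len(boys)
mean_girls = sum(girls) / len(girls)

# Estimated probability that a randomly chosen girl has a bigger brain
# than a randomly chosen boy, by pairing the independent samples:
p_girl_bigger = sum(g > b for g, b in zip(girls, boys)) / len(boys)

print(f"mean boys:  {mean_boys:.0f}")
print(f"mean girls: {mean_girls:.0f}")
print(f"P(girl > boy): {p_girl_bigger:.2f}")   # roughly 0.24 for these numbers
```

So even under assumptions generous to the ‘bigger brains’ statement, around a quarter of girl–boy comparisons would come out the other way, which is why the bald statement is open to interpretation.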

Then there’s item 12, which says “there are critical periods in childhood after which certain things can no longer be learned”. The research suggests that there are indeed critical periods for some sensory functions – children with certain eye defects corrected after a certain age never develop normal vision, and children deprived of early language input have failed to develop normal speech. So whether the statement is ‘correct’ or not depends on what is meant by ‘certain things’ and ‘learned’. Then take item 14, which deems the statement “learning is not due to the addition of new cells to the brain” ‘incorrect’. That assertion doesn’t appear to be incorrect for the hippocampus. Admittedly much of the relevant research has taken place since this item appeared in the Herculano-Houzel survey, but the findings had been around for a decade before the Dekker et al study, and the point was raised by Howard-Jones et al.

In addition, some statements differed only in respect of some fairly fine-grained distinctions. Item 15 says “individuals learn better when they receive information in their preferred learning style (e.g., auditory, visual, kinesthetic)” and is deemed ‘incorrect’. But item 27 “individual learners show preferences for the mode in which they receive information (e.g., visual, auditory, kinesthetic)” is deemed ‘correct’.

Both items distinguish generic preferred learning styles (mine happens to consist of reading new material whilst propped up in bed, followed by mulling it over while I go for a walk) from a specific Learning Styles model, derived from Neuro-Linguistic Programming theory, involving three named sensory domains. Respondents who are aware of criticisms of the VAK Learning Styles model might justifiably question whether individual learners actually do show preferences for the mode in which they receive information; what about people who learn best from TV documentaries, for example? Audio-visual communication is itself a mode of information transmission, but it involves two sensory modalities. And what about constraints imposed by the learning objective itself? Most people would prefer to learn to drive or swim by receiving information kinaesthetically, whatever their usual preferences, because it’s extremely difficult to learn to do either using only visual and/or auditory modalities.

The upshot is that at least 7 of Dekker et al’s 32 statements contain quite high levels of ambiguity, due either to the nature of the relevant research findings or to the wording of the assertions. It’s quite feasible that Dekker et al’s counterintuitive finding, that general knowledge about the brain didn’t protect teachers against believing neuromyths, might actually be an experimental artefact.

neuromyths: correct or incorrect, true or false?

The second difference was in the way response ambiguity was dealt with. Herculano-Houzel and Howard-Jones et al used subjective agreement (Y/N/DK). Dekker et al used objective ‘correctness’ (C/I/DK) – which isn’t the same thing.

I came across the Dekker et al study via Kevin Wheldall’s blog Notes from Harefield. When responding to my comments about ambiguity in survey items, he noted that the Dekker et al statements were presented as an online quiz on Leah Tomlin’s Education Elf blog. The quiz differs from the Dekker et al survey in that it doesn’t have a ‘don’t know’ option. In other words, in the quiz itself there’s no acknowledgement of any possible ambiguity in the assertions – although several people who have completed it have commented on ambiguities in the statements. The Education Elf discusses the study in more detail here.

Following the trail of these studies has been a fascinating demonstration of what this blog is named after: logical incrementalism. The research questions have shifted from the degree of ‘neuroscience literacy’ of the public to the prevalence of ‘neuromyths’ amongst teachers. The measure of the ‘correctness’ of statements has changed too: from the percentage agreement amongst neuroscientists, to statements being categorised as either ‘correct’ or ‘incorrect’ with no explanation of the criteria for that categorisation, or, if one includes the Education Elf quiz, as ‘true’ or ‘false’ with no explanation. All this despite an extensive discussion in the literature of the nature of the misconceptions, misunderstandings, misreadings and misquotings involved, and despite respondents drawing attention to ambiguities that might have affected their responses.

There are obvious advantages in re-using survey items developed in previous studies. Many methodological issues would have been addressed in the initial survey design, and any residual weaknesses would have become apparent from the results. However, there are risks involved in making incremental changes to previous questionnaires unless attention is paid to the parameters that guided their development. In this case, the criterion for ‘correctness’ has been largely overlooked, as has the ambiguity that’s an inevitable outcome of asking for Y/N/DK responses.

There’s no question that misconceptions, misunderstandings, misreadings and misquotings of the neuroscience literature have contributed to the prevalence of neuromyths amongst the general public and amongst teachers. Teachers might indeed be especially susceptible because findings from neuroscience are directly applicable to their work and because many who haven’t studied biological sciences are likely to rely on simplified sources for information about the brain.

Having said that, I’d suggest that labelling complex, uncertain or contentious research findings as either correct or incorrect, true or false, facts or myths, is what got us into this mess in the first place. Clearly teachers need more, and better, information about the brain, but some basic biology might prove more useful than putting a tick or cross next to oversimplified ideas.


Dekker, S, Lee, NC, Howard-Jones, P & Jolles, J (2012). Neuromyths in Education: Prevalence and Predictors of Misconceptions among Teachers. Frontiers in Psychology, 3, 429.

Herculano-Houzel, S (2002). Do you know your brain? A survey on public neuroscience literacy at the closing of the decade of the brain. Neuroscientist 8, 98–110.

Howard-Jones, P, Franey, Mashmoushi, R & Liao, YC (2009). The neuroscience literacy of trainee teachers. Paper presented at British Educational Research Association Annual Conference, Manchester.

Lenroot, RK, Gogtay, N, Greenstein, DK, Molloy, E, Wallace, GL, Clasen, LS, Blumenthal JD, Lerch, J, Zijdenbos, AP, Evans, AC, Thompson, PM & Giedd, JN (2007). Sexual dimorphism of brain developmental trajectories during childhood and adolescence. NeuroImage 36, 1065–1073.

Organisation for Economic Co-operation and Development (2002). Understanding the brain: Towards a new learning science. Paris: OECD.

Organisation for Economic Co-operation and Development (2007). Understanding the Brain: The birth of a learning science. Paris: OECD.

Edited for clarity 3/6/15 and 13/2/18.