magic beans, magic bullets and crypto-pathologies

In the previous post, I took issue with a TES article that opened with fidget-spinners and closed by describing dyslexia and ADHD as ‘crypto-pathologies’, presumably by analogy with cryptozoology – the study of animals that exist only in folklore. But dyslexia and ADHD are not the equivalent of bigfoot and unicorns.

To understand why, you have to unpack what’s involved in diagnosis.

diagnosis, diagnosis, diagnosis

Accurate diagnosis of health problems has always been a challenge because:

  • Some disorders* are difficult to diagnose. A broken femur, Bell’s palsy or measles are easier to figure out than hypothyroidism, inflammatory bowel disease or Alzheimer’s.
  • It’s often not clear what’s causing the disorder. Fortunately, you don’t have to know the immediate or root causes for successful treatment to be possible. Doctors have made the reasonable assumption that patients presenting with the same signs and symptoms§ are likely to have the same disorder.

Unfortunately, listing the signs and symptoms isn’t foolproof because:

  • some disorders produce different signs and symptoms in different patients
  • different disorders can have very similar signs and symptoms.

some of these disorders are not like the others…

To complicate the picture even further, some signs and symptoms are qualitatively different from the aches, pains, rashes or lumps that indicate disorders obviously located in the body;  they involve thoughts, feelings and behaviours instead. Traditionally, human beings have been assumed to consist of a physical body and non-physical parts such as mind and spirit, which is why disorders of thoughts, feelings and behaviours were originally – and still are – described as mental disorders.

Doctors have always been aware that mind can affect body and vice versa. They’ve also long known that brain damage and disease can affect thoughts, feelings, behaviours and physical health. In the early 19th century, mental disorders were usually identified by key symptoms. The problem was that the symptoms of different disorders often overlapped. A German psychiatrist, Emil Kraepelin, proposed instead classifying mental disorders according to syndromes – patterns of co-occurring signs and symptoms. Kraepelin hoped this approach would pave the way for finding the biological causes of disorders. (In 1906, Alois Alzheimer, working in Kraepelin’s lab, identified the plaques associated with the dementia that now bears his name.)

Kraepelin’s approach laid the foundations for two widely used modern classification systems for mental disorders: the Diagnostic and Statistical Manual of Mental Disorders, published by the American Psychiatric Association, currently in its 5th edition (DSM-5), and the Classification of Mental and Behavioural Disorders in the International Classification of Diseases, published by the World Health Organisation, currently in its 10th edition (ICD-10).

Kraepelin’s hopes for his classification system have yet to be realised. That’s mainly because the brain is a difficult organ to study. You can’t poke around in it without putting your patient at risk. It’s only in the last few decades that scanning techniques have enabled researchers to look more closely at the structure and function of the brain, and the scans require interpretation –  brain imaging is still in its infancy.

you say medical, I say experiential

Kraepelin’s assumptions about distinctive patterns of signs and symptoms, and about their biological origins, were reasonable ones. His ideas, however, were almost the polar opposite of those of his famous contemporary, Sigmund Freud, who located the root causes of mental disorders in childhood experience. The debate has raged ever since, and it persists largely because of the plasticity of the brain. Brains change in structure and function over time, and several factors contribute to the changes:

  • genes – determine underlying structure and function
  • physical environment e.g. biochemistry, nutrients, toxins – affects structure and function
  • experience – the brain processes information, and information changes the brain’s physical structure and biochemical function.

On one side of the debate is the medical model: in essence, it assumes that the causes of mental disorders are primarily biological, often due to a ‘chemical imbalance’. There’s evidence to support this view – medication can improve a patient’s symptoms. The problem with the medical model is that it tends to assume:

  • a ‘norm’ for human thoughts, feelings and behaviours – disorders are seen as departures from that norm
  • the cause of mental disorders is biochemical, and the chemical ‘imbalance’ is identified (or not) through trial and error – errors can be catastrophic for the patient
  • the cause is located in the individual.

On the other side of the debate is what I’ll call the experiential model (often referred to as anti-psychiatry or critical psychiatry). In essence, it assumes that the causes of unwanted thoughts, feelings or behaviours are primarily experiential, often due to adverse experiences in childhood. The problem with that model is that it tends to assume:

  • the root causes are experiential and not biochemical
  • the causes are due to the individual’s response to adverse experiences
  • first-hand reports of early adverse experiences are always reliable, which they’re not.


Kraepelin’s classification system wasn’t definitive – it couldn’t be, because no one knew what was causing the disorders. But it offered the best chance of identifying distinct mental health problems – and thence their causes and treatments. The disorders identified in Kraepelin’s system, the DSM and the ICD were – and most still are – merely labels given to clusters of co-occurring signs and symptoms. People showing a particular cluster might well share the same underlying biological causes, but that doesn’t mean they do share the same underlying causes, or that the origin of the disorder is biological.

This is especially true for signs and symptoms that could have many causes. There could be any number of reasons for someone hallucinating, withdrawing, feeling depressed or anxious – or having difficulty learning to read or maintain attention.  They might not have a medical ‘disorder’ as such. But you wouldn’t know that to read through the disorders listed in the DSM or ICD. They all look like bona fide, well-established medical conditions, not like labels for bunches of symptoms that sometimes co-occur and sometimes don’t, and that have a tendency to appear or disappear with each new edition of the classification system.  That brings us to the so-called ‘crypto-pathologies’ referred to in the TES article.

Originally, terms like dyslexia were convenient and legitimate shorthand labels for specific clusters of signs or symptoms. Dyslexia means difficulty with reading, as distinct from alexia which means not being able to read at all; both problems can result from stroke or brain damage. Similarly, autism was originally a shorthand term for the withdrawn state that was one of the signs of schizophrenia – itself a label.  Delusional parasitosis is also a descriptive label (the parasites being what’s delusional, not the itching).


What’s happened over time is that many of these labels have become reified – they’ve transformed from mere labels into disorders widely perceived as having an existence independent of the label. Note that I’m not saying the signs and symptoms don’t exist. There are definitely children who struggle with reading regardless of how they’ve been taught; with social interaction regardless of how they’ve been brought up; and with maintaining focus regardless of their environment. What I am saying is that there might be different causes, or multiple causes, for clusters of very similar signs and symptoms.  Similar signs and symptoms don’t mean that everybody manifesting those signs and symptoms has the same underlying medical disorder –  or even that they have a medical disorder at all.

The reification of labels has played havoc with research for decades. Suppose you have a group of children whose reading problems have different causes, but you don’t know what those causes are, so you lump the children together under their DSM label; another group with different causes for their problems with social interaction, lumped together the same way; and a third group with different causes for their problems maintaining focus, also lumped together. You are unlikely to find, within any of those groups, a common cause for the signs and symptoms. It’s this failure to find distinctive features at the group level that has been largely responsible for claims that dyslexia, autism or ADHD ‘don’t exist’, or that treatments that have evidently worked for some individuals must be spurious because they don’t work for other individuals or for the heterogeneous group as a whole.


Oddly, in his TES article, Tom refers to autism as an ‘identifiable condition’ but to dyslexia and ADHD as ‘crypto-pathologies’, even though the diagnostic status of autism in the DSM and ICD is on a par with that of ADHD and of ‘specific learning disorder with impairment in reading’, for which dyslexia is recognised as an alternative term (DSM), or ‘dyslexia and alexia’ (ICD). Delusional parasitosis, despite having the same diagnostic status and a plausible biological mechanism for its existence, is dismissed as ‘a condition that never was’.

Tom is entitled to take a view on diagnosis, obviously. He’s right to point out that reading difficulties can be due to lack of robust instruction, and inattention can be due to the absence of clear routines. He’s right to dismiss faddish, simplistic (but often costly) remedies. But the research is clear that children can have difficulties with reading due to auditory and/or visual processing impairments (search Google Scholar for ‘dyslexia visual auditory’), that they can have difficulties maintaining attention due to low dopamine levels – exactly what Ritalin addresses (Iversen, 2006) – and that they can experience intolerable itching that feels as if it’s caused by parasites.

But Tom doesn’t refer to the research, and despite provisos such as acknowledging that some children suffer from ‘real and grave difficulties’, he effectively dismisses some of those difficulties as crypto-pathologies and implies they can be fixed by robust teaching and clear routines – or that they are just imaginary. There’s a real risk, if the research is bypassed, of ‘robust teaching’ and ‘clear routines’ becoming the magic bullets and magic beans he rightly despises.


*Disorder implies a departure from the norm. At one time, it was assumed the norm for each species was an optimal set of characteristics. Now, the norm is statistically derived, usually based on the range within which 95% of the population falls.

§ Technically, symptoms are indicators of a disorder experienced only by the patient and signs are detectable by others.  ‘Symptoms’ is often used to include both.


Iversen, L. (2006). Speed, Ecstasy, Ritalin: The Science of Amphetamines. Oxford University Press.

white knights and imaginary dragons: Tom Bennett on fidget-spinners

I’ve crossed swords – or more accurately, keyboards – with Tom Bennett, the government’s behaviour guru tsar adviser, a few times, mainly about learning styles. And about Ken Robinson. Ironic really, because broadly speaking we’re in agreement. Ken Robinson’s ideas about education are woolly and often appear to be based on opinion rather than evidence, and there’s clear evidence that teachers who use learning styles, thinking hats and brain gym are probably wasting their time. Synthetic phonics helps children read, whole-school behaviour policies are essential for an effective school, and so on…

My beef with Tom has been his tendency to push his conclusions further than the evidence warrants. Ken Robinson is ‘the butcher given a ticker tape parade by the National Union of Pigs‘.  Learning Styles are ‘the ouija board of serious educational research‘.  What raised red flags for me this time is a recent TES article by Tom prompted by the latest school-toy fad ‘fidget-spinners’.


Tom begins with claims that fidget-spinners can help children concentrate. He says “I await the peer-reviewed papers from the University of Mickey Mouse confirming these claims“, assuming that he knows what the evidence will be before he’s even seen it.  He then introduces the idea that ‘such things’ as fidget-spinners might help children with an ‘identifiable condition such as autism or sensory difficulties’, and goes on to cite comments from several experts about fidget-spinners in particular and sensory toys in general. We’re told “…if children habitually fidget, the correct path is for the teacher to help the child to learn better behaviour habits, unless you’ve worked with the SENCO and the family to agree on their use. The alternative is to enable and deepen the unhelpful behaviour. Our job is to support children in becoming independent, not cripple them with their own ticks [sic]”.

If a child’s fidgeting is problematic, I completely agree that a teacher’s first course of action should be to help them stop fidgeting, although Tom offers no advice about how to do this. I’d also agree that the first course of action in helping a fidgety child shouldn’t be to give them a fidget-toy.

There’s no question that children who just can’t seem to sit still or keep their hands still, or who incessantly chew their sleeves, are seeking sensory stimulation – that’s what those activities are, by definition. It doesn’t follow that allowing children to walk about, or to use fidget or chew toys, will ‘cripple them with their own ticks’. These behaviours are not tics, and usually extinguish spontaneously over time. If they’re causing disruption in the classroom, questions need to be asked about the school’s expectations and the suitability of its provision for the child, not about learning unspecified ‘better behaviour habits’.


Tom then devotes an entire paragraph, bizarrely, to Listerine. His thesis is that sales of antiseptic mouthwash soared due to an advertising campaign persuading Americans that halitosis was a serious social problem. His evidence is a blogpost by Sarah Zhang, a science journalist. Sarah’s focus is advertising that essentially invented problems to be cured by mouthwash or soap. Neither she nor Tom mentions the pre-existing preoccupation with cleanliness that arose from the realisation – before antibiotics were discovered – that bacterial infection was a primary cause of death and debility, and that it could be significantly reduced by alcohol rubs, boiling and soap.

itchy and scratchy

The Listerine advertising campaign leads Tom to consider ‘fake or misunderstood illnesses’ that he describes as ‘charlatan’. His examples are delusional parasitosis (people believe their skin is itching because it’s infested with parasites) and Morgellons (the belief that the itching is caused by fibres). Tom says “But there are no fibres or parasites. It’s an entirely psycho-somatic condition. Pseudo sufferers turn up at their doctors scratching like mad, some even cutting themselves to dig out the imaginary threads and crypto-bugs. Some doctors even wearily prescribe placebos and creams that will relieve the “symptoms”. A condition that never was, dealt with by a cure that won’t work. Spread as much by belief as anything else, like fairies.”

Here, Tom is pushing the evidence way beyond its limits. The fact that the bugs or fibres are imaginary doesn’t mean the itching is imaginary. The skin contains several different types of tactile receptor that send information to various parts of the brain. The tactile sensory system is complex so there are several points at which a ‘malfunction’ could occur.  The fact that busy GPs – who for obvious reasons don’t have the time or resources to examine the functioning of a patient’s neural pathways at molecular level – wearily prescribe a placebo, says as much about the transmission of medical knowledge in the healthcare system as it does about patients’ beliefs.


Tom refers to delusional parasitosis and Morgellons as ‘crypto-pathologies’ – whatever that means – and then introduces us to some crypto-pathologies he claims are encountered in school: dyslexia and ADHD. As he points out, dyslexia and ADHD are indeed labels for ‘a collection of observed symptoms’. He’s right that some children with difficulty reading might simply need good reading tuition, and those with attention problems might simply need a good relationship with their teacher and clear routines. As he points out, “…our diagnostic protocol is often blunt. Because we’re unsure what it is we’re diagnosing, and it becomes an ontological problem“. He then says “This matters when we pump children with drugs like Ritalin to stun them still.”

Again, some of Tom’s claims are correct but others are not warranted by the evidence. In the UK, Ritalin is usually prescribed by a paediatrician or psychiatrist after an extensive assessment of the child, and its effects should be carefully monitored. It’s a stimulant that increases available levels of dopamine and norepinephrine, and it often enhances the ability to concentrate. It isn’t ‘pumped into’ children and it doesn’t ‘stun them still’. In the UK at least, NICE guidelines indicate it should be used as a last resort. The doubling of its use in the last decade is a worrying trend, but it is more likely to be due to the crisis in child and adolescent mental health services than to an assumption that all attention problems in children are caused by a supposed medical condition we call ADHD.

Tom, rightly, targets bullshit. He says it matters because “many children suffer from very real and very grave difficulties, and it behoves us as their academic and social guardians to offer support and remedy when we can”. Understandably he wants to drive his point home. But superficial analysis and use of hyperbole risk real and grave difficulties being marginalised at best and ridiculed at worst by teachers who don’t have the time/energy/inclination to check out the detail of what he claims.

Specialist education, health and care services for children have been in dire straits for many years, and the situation isn’t getting any better. This means teachers are likely to have little information about the underlying causes of children’s difficulties in school. If teachers take what Tom says at face value, there’s a real risk that children with real difficulties – whether they need to move their fingers or chew in order to concentrate, experience unbearable itching, struggle to read because of auditory, visual or working memory impairments, or have levels of dopamine that prevent them from concentrating – will be seen by some as having ‘crypto-conditions’ that can be resolved by good teaching and clear routines. And if they’re not resolved, then the condition must be ‘psycho-somatic’. Using evidence to make some points but ignoring it to make others means that the slings and arrows Tom hurls at the snake-oil salesmen, and at the white knights galloping to save us from imaginary dragons, are quite likely to be used as ammunition against the very children he seeks to help.

phlogiston for beginners

Say “learning styles” to some teachers and you’re likely to get your head bitten off. Tom Bennett, the government’s behaviour tsar/guru/expert/advisor, really, really doesn’t like the idea of learning styles as he has made clear in a series of blogposts exploring the metaphor of the zombie.

I’ve come in for a bit of flak from various sources for suggesting that Bennett might have rather over-egged the learning styles pudding. I’ve been accused of not accepting the evidence, not admitting when I’m wrong, advancing neuromyths, being a learning styles advocate, being a closet learning styles advocate, and by implication not caring about the chiiiiiiiildren and being responsible for a metaphorical invasion by the undead. I reject all those accusations.

I’m still trying to figure out why learning styles have caused quite so much fuss. I understand that teachers might be a bit miffed about being told by schools to label children as visual, auditory or kinaesthetic (VAK) learners only to find there’s no evidence that they can be validly categorised in that way. But the time and money wasted on learning styles surely pales into insignificance next to the amounts squandered on the industry that’s sprung up around some questionable assessment methods, an SEN system that a Commons Select Committee pronounced not fit for purpose, or a teacher training system that for generations has failed to equip teachers with the skills they need to evaluate popular wheezes like VAK and brain gym.

And how many children have suffered actual harm as a result of being given a learning style label? I’m guessing very few compared to the number whose life has been blighted by failing the 11+, being labelled ‘educationally subnormal’, or more recent forms of failure to meet the often arbitrary requirements of the education system.  What is it about learning styles?

the learning styles neuromyth

I made the mistake of questioning some of the assumptions implicit in this article, notably that the concept of learning styles is a false belief, that it’s therefore a neuromyth and is somehow harmful in that it raises false hopes about transforming society.

My suggestions – that the evidence for the learning styles concept is mixed rather than non-existent, that there are some issues around the idea of the neuromyth that need to be addressed, and that the VAK idea, even if wrong, probably isn’t the biggest hole in the education system’s bucket – were taken as a sign that my understanding of the scientific method must be flawed.

the evidence for aliens

One teacher (no names, no pack drill) said “This is like saying the ‘evidence for aliens is mixed’”. No, it isn’t. There are so many planets in the universe that it’s highly unlikely Earth is the only one supporting life-forms, but so far we have next to no evidence of their existence. But a learning style isn’t a life-form; it’s a construct, a label for phenomena that researchers have observed, and a pretty woolly label at that. It could refer to a wide range of very different phenomena, some of which are really out there, some of which are experimental artefacts, and some of which might be figments of a researcher’s imagination. It’s pointless speculating about whether learning styles exist, because that depends on what you label as a ‘learning style’. Life-forms are a different kettle of fish; there’s some debate around what constitutes a life-form and what doesn’t, but the concept is far more tightly specified than any learning style ever has been.

you haven’t read everything

I was then chided for pointing out that Tom Bennett said he hadn’t finished reading the Coffield Learning Styles Review when (obviously) I hadn’t read everything there was to read on the subject either. But I hadn’t complained that Tom hadn’t read everything; I was pointing out that, by his own admission in his book Teacher Proof, he’d stopped reading before he got to the part of the Coffield review that discusses learning styles models found to have validity and reliability – so it’s not surprising he came to a conclusion that Coffield didn’t support.

my evidence weighs more than your evidence

Then, “I’ve seen the tiny, tiny evidence you cite to support LS. Dwarfed by oceans of ‘no evidence’. There’s more evidence for ET than LS”. That’s not how the evaluation of scientific evidence works. It isn’t a case of putting the ‘for’ evidence in one pan of the scales and the ‘against’ evidence in the other, with the heaviest evidence winning. On that basis, the heliocentric theories of Copernicus and Kepler would never have seen the light of day.

how about homeopathy?

Finally, “How about homeopathy? Mixed evidence from studies.” The implication is that if I’m not dismissing learning styles because the evidence is mixed, then I can’t dismiss homeopathy. Again, the analogy doesn’t hold. Research shows that there is an effect associated with homeopathic treatments – something happens in some cases. But the theory of homeopathy doesn’t make sense in the context of what we know about biology, chemistry and physics. This suggests that the problem lies in the explanation for the effect, not the effect itself. The concept of learning styles, by contrast, doesn’t conflict with what we know about the way people learn. It’s quite possible that people do have stable traits when it comes to learning. Whether or not they do, and if they do what those traits are, is another matter.

Concluding from complex and variable evidence that learning styles don’t exist, and that not dismissing them out of hand is akin to believing in aliens and homeopathy, looks to me suspiciously like saying  “Phlogiston? Pfft! All that stuff about iron filings increasing in weight when they combust is a load of hooey.”

learning styles: a response to Greg Ashman

In a post entitled Why I’m happy to say that learning styles don’t exist Greg Ashman says that one of the arguments I used in my previous post about learning styles “seems to be about the semantics of falsification“. I’m not sure that semantics is quite the right term, but the falsification of hypotheses certainly was a key point. Greg points out that “falsification does not meaning proving with absolute certainty that something does not exist because you can’t do this and it would therefore be impossible to falsify anything”. I agree completely. It’s at the next step that Greg and I part company.

Greg seems to be arguing that because we can’t falsify a hypothesis with absolute certainty, sufficient evidence of falsification is enough to be going on with. That’s certainly true for science as a work-in-progress. But he then goes on to imply that if there’s little evidence that something exists, the lack of evidence for its existence is good enough to warrant us concluding it doesn’t exist.

I’m saying that because we can’t falsify a hypothesis with absolute certainty, we can never legitimately conclude that something doesn’t exist. All we can say is that it’s very unlikely to exist. Science isn’t about certainty, it’s about reducing uncertainty.

My starting point is that because we don’t know anything with absolute certainty, there’s no point making absolutist statements about whether things exist or not. That doesn’t get us anywhere except into pointless arguments.

Greg’s starting point appears to be that if there’s little evidence that something exists, we can safely assume it doesn’t exist, therefore we are justified in making absolutist claims about its existence.

Claiming categorically that learning styles, Santa Claus or fairies don’t exist is unlikely to have a massively detrimental impact on people’s lives. But putting the idea into teachers’ heads that good-enough falsification allows us to dismiss outright the existence of anything for which there’s little evidence is risky. The history of science is littered with tragic examples of theories being prematurely dismissed on the basis of little evidence – germ theory springing first to mind.

testing the learning styles hypothesis

Greg also says “a scientific hypothesis is one which makes a testable prediction. Learning styles theories do this.”

No, they don’t. That’s the problem. Mathematicians can precisely define the terms in an equation. Philosophers can decide what they want the entities in their arguments to mean. Thanks to some sterling work on the part of taxonomists, there’s now a strong consensus on what a swan, a crow or a duck-billed platypus is, rather than the appalling muddle that preceded it. But learning styles are not terms in an equation, or entities in philosophical arguments. They are not even like swans, crows or duck-billed platypuses; they are complex, fuzzy conceptual constructs. Unless you are very clear about how the particular constructs in your learning styles model can be measured, so that everyone who tests your model is measuring exactly the same thing, the hypotheses might be testable in principle, but in practice it’s quite likely no one has tested them properly. And that’s before you even get to what the conceptual constructs actually map on to in the real world.

This is a notorious problem for the social sciences. It doesn’t follow that all conceptual constructs are invalid, or that all hypotheses involving them are pseudoscience, or that the social sciences aren’t sciences at all. All it means is that social scientists often need to be a lot more rigorous than they have been.

I don’t understand why it’s so important for Daniel Willingham or Tom Bennett or Greg Ashman to categorise learning styles – or anything else for that matter – as existing or not. The evidence for the existence of Santa Claus, fairies or the Loch Ness monster is pretty flimsy, so most of us work on the assumption that they don’t exist. The fact that we can’t prove conclusively that they don’t exist doesn’t mean that we should be including them in lesson plans. But I’m not advocating the use of Santa Claus, fairies, the Loch Ness monster or learning styles in the classroom. I’m pointing out that saying ‘learning styles don’t exist’ goes well beyond what the evidence shows and, contrary to what Greg says in his post, implies that we can falsify a hypothesis with absolute certainty.

Absence of evidence is not evidence of absence. That’s an important scientific principle. It’s particularly relevant to a concept like learning styles, which is an umbrella term for a whole bunch of models encompassing a massive variety of allegedly stable traits, most of which have been poorly operationalised and poorly evaluated in terms of their contribution – or otherwise – to learning. The evidence about learning styles is weak, contradictory and inconclusive. I can’t see why we can’t just say that it’s weak, contradictory and inconclusive, so teachers would be well advised to give learning styles a wide berth – and leave it at that.

learning styles: what does Tom Bennett* think?

Tom Bennett’s disdain for learning styles is almost palpable, reminiscent at times of Richard Dawkins commenting on a papal pronouncement, but it started off being relatively tame. In May 2013, in a post on the ResearchEd2013 website coinciding with the publication of his book Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it, he asks ‘why are we still talking about learning styles?’ and claims “there is an overwhelming amount of evidence suggesting that learning styles do not exist, and that therefore we should not be instructing students according to these false preferences.”

In August the same year for his New Scientist post Separating neuromyths from science in education, he tones down the claim a little, pointing out that learning styles models are “mostly not backed by credible evidence”.

But the following April, Tom’s back with a vitriolic vengeance in the TES with Zombie bølløcks: World War VAK isn’t over yet. He rightly – and colourfully – points out that time and resources shouldn’t be wasted on initiatives that have not been demonstrated to be effective. And he’s quite right to ask “where were the educationalists who read the papers, questioned the credentials and demanded the evidence?” But Bennett isn’t just questioning, he’s angry.

He’s thinking of putting on his “black Thinking Hat of reprobation and fury”. Why? Because “it’s all bølløcks, of course. It’s bølløcks squared, actually, because not only has recent and extensive investigation into learning styles shown absolutely no correlation between their use and any perceptible outcome in learning, not only has it been shown to have no connection to the latest ways we believe the mind works, but even investigation of the original research shows that it has no credible claim to be taken seriously. Learning Styles are the ouija board of serious educational research” and he includes a link to Pashler et al to prove it.

Six months later, Bennett teams up with Daniel Willingham for a TES piece entitled Classroom practice – Listen closely, learning styles are a lost cause in which Willingham reiterates his previous arguments and Tom contributes an opinion piece dismissing what he calls zombie theories, ranging from red ink negativity to Neuro-Linguistic Programming and Multiple Intelligences.

why learning styles are not a neuromyth

Tom’s anger would be justified if he were right. But he isn’t. In May 2013, in Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it he says of the VAK model “And yet there is no evidence for it whatsoever. None. Every major study done to see if using learning style strategies actually work has come back with totally negative results” (p.144). He goes on to dismiss Kolb’s Learning Style Inventory and Honey and Mumford’s Learning Styles Questionnaire, adding “there are others but I’m getting tired just typing all the categories and wondering why they’re all so different and why the researchers disagree” (p.146). That tells us more about Tom’s evaluation of the research than it does about the research itself.

Education and training research has long suffered from a serious lack of rigour. One reason is that both are heavily derivative fields: education and training theory draws on disciplines as diverse as psychology, sociology, philosophy, politics, architecture, economics and medicine. Education and training researchers therefore need a good understanding of a wide range of fields. Taking all relevant factors into account is challenging, and in the meantime teachers and trainers have to get on with the job. So it’s tempting to get an apparently effective learning model out there ASAP, rather than make sure it’s rigorously tested and systematically compared to other learning models first.

Review paper after review paper has come to similar conclusions when evaluating the evidence for learning styles models:

• there are many different learning styles models, featuring many different learning styles
• it’s difficult to compare models because they use different constructs
• the evidence supporting learning styles models is weak, often because of methodological issues
• some models do have validity or reliability; others don’t
• people do have different aptitudes in different sensory modalities, but
• there’s no evidence that teaching/training all students in their ‘best’ modality improves performance.

If Tom hadn’t got tired typing, he might have discovered that some learning styles models have more validity than the three he mentions. And if he’d read the Coffield review more carefully, he would have found that the models are so different because they are based on different theories and use different (often poorly operationalised) constructs, and that researchers disagree for a host of reasons – a phenomenon he’d do well to get his head round if he wants teachers to get involved in research.

evaluating the evidence

Reviewers of learning styles models have evaluated the evidence by looking in detail at its content and quality and have then drawn general conclusions. They’ve examined, for example, the validity and reliability of component constructs, what hypotheses have been tested, the methods used in evaluating the models and whether studies have been peer-reviewed.

What they’ve found is that people do have learning styles (depending on how learning style is defined), but there are considerable variations in validity and reliability between learning styles models, and that overall the quality of the evidence isn’t very good. As a consequence, reviewers have been in general agreement that there isn’t enough evidence to warrant teachers investing time or resources in a learning styles approach in the classroom.

But Tom’s reasoning appears to move in the opposite direction; to start with the conclusion that teachers shouldn’t waste time or resources on learning styles, and to infer that;

• variable evidence means all learning styles models can be rejected
• poor quality evidence means all learning styles models can be rejected
• if some learning styles models are invalid and unreliable, they must all be invalid and unreliable
• if the evidence is variable and poor, and some learning styles models are invalid or unreliable, then learning styles don’t exist.

definitions of learning style

It’s Daniel Willingham’s video Learning styles don’t exist that sums it up for Tom. So why does Willingham say learning styles don’t exist? It all depends on definitions, it seems. On his learning styles FAQ page Willingham says;

I think that often when people believe that they observe obvious evidence for learning styles, they are mistaking it for ability… The idea that people differ in ability is not controversial—everyone agrees with that. Some people are good at dealing with space, some people have a good ear for music, etc. So the idea of “style” really ought to mean something different. If it just means ability, there’s not much point in adding the new term.

This is where Willingham lost me. Obviously, a preference for learning in a particular way is not the same as an ability to learn in a particular way. And I agree that there’s no point talking about style if what you mean is ability. The VAK model claims that preference is an indicator of ability, and the evidence doesn’t support that hypothesis.

But not all learning styles models are about preference; most claim to identify patterns of ability. That’s why learning styles models have proliferated; employers want a quick overall assessment of employees’ strengths and weaknesses when it comes to learning. Because the models encompass factors other than ability – such as personality and ways of approaching problem-solving – referring to learning styles rather than ability seems reasonable.

So if the idea that people differ in ability is not controversial, many learning styles models claim to assess ability, and some are valid and/or reliable, how do Willingham and Bennett arrive at the conclusion that learning styles don’t exist?

The answer, I suspect, is that they are equating learning styles with the VAK model, the version most widely used in primary education. It’s no accident that Coffield et al evaluated learning styles and pedagogy in post-16 learning; it’s the world outside the education system that’s the main habitat of learning styles models. It’s fair to say there’s no evidence to support the VAK model – and many others – and that it’s not worth teachers investing time and effort in them. But the evidence simply doesn’t warrant lumping together all learning styles models and dismissing them outright.

taking liberties with the evidence

I can understand that if you’re a teacher who’s been consistently told that learning styles are the way to go and then discover there’s insufficient evidence to warrant you using them, you might be a bit miffed. But Tom’s reprobation and fury doesn’t warrant him taking liberties with the evidence. This is where I think Tom’s thinking goes awry;

• If the evidence supporting learning styles models is variable it’s variable. It means some learning styles models are probably rubbish but some aren’t. Babies shouldn’t be thrown out with bathwater.

• If the evidence evaluating learning styles is of poor quality, it’s of poor quality. You can’t conclude from poor quality evidence that learning styles models are rubbish. You can’t conclude anything from poor quality evidence.

• If the evidence for learning styles models is variable and of poor quality, it isn’t safe to conclude that learning styles don’t exist. Especially if review paper after review paper has concluded that they do – depending on your definition of learning styles.

I can understand why Willingham and Bennett want to alert teachers to the lack of evidence for the VAK learning styles model. But I felt Daniel Willingham’s claim that learning styles don’t exist was misleading, and that Tom Bennett’s vitriol was unjustified. There’s a real risk, in the case of learning styles, of one neuromyth being replaced by another.

*Tom appears to have responded to this post here and here, with two more articles about zombies.

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9, 106-116.

learning styles: the evidence

The PTA meeting was drawing to a close. The decision to buy more books for the library instead of another interactive whiteboard had been unanimous, and the conversation had turned to educational fads.

“Now, of course,” the headteacher was saying, “it’s all learning styles. We’re visual, auditory or kinaesthetic learners – you know, Howard Gardner’s Multiple Intelligences.” His comment caught my attention because I was familiar with Gardner’s managerial competencies, but couldn’t recall them having anything to do with sensory modalities and I didn’t know they’d made their way into primary education. My curiosity piqued, I read Gardner’s book Frames of Mind: The Theory of Multiple Intelligences. It prompted me to delve into his intriguing earlier account of working with brain-damaged patients – The Shattered Mind.

Where does the VAK model come from?

Gardner’s multiple intelligences model was clearly derived from his pretty solid knowledge of brain function, but wherever the idea of visual, auditory and kinaesthetic (VAK) learning styles had come from, it didn’t look like it came from Gardner. A bit of Googling ‘learning styles’ kept bringing up the names Dunn and Dunn, but I couldn’t find anything on the VAK model’s origins. So I phoned a friend. “It’s based on Neuro-Linguistic Programming”, she said.

This didn’t bode well. Neuro-Linguistic Programming (NLP) is a therapeutic approach devised in the 1970s by Richard Bandler, a psychology graduate, and John Grinder, then an assistant professor of linguistics who, like Frank Smith, had worked in George magical-number-seven-plus-or-minus-two Miller’s lab and been influenced by Noam Chomsky’s ideas about linguistics.

If I’ve understood Bandler and Grinder’s idea correctly, they proposed that insights into people’s internal, subjective sensory representations can be gleaned from their eye movements and the words they use. According to their model, this makes it possible to change those internal representations to reduce anxiety or eliminate phobias. Although there are some valid elements in the theory behind NLP, evaluations of the model have in the main been critical and evidence supporting the effectiveness of NLP as a therapeutic approach has been notable by its absence (see e.g. Witkowski, 2010).

So the VAK Learning Styles model appeared to be an educational intervention derived from a debatable theory and a therapeutic technique that doesn’t work too well.

Evaluating the evidence

Soon after I’d phoned my friend, in 2004, Frank Coffield and colleagues published a systematic and rigorous evaluation of 13 learning styles models used in post-16 learning and found the reliability and validity of many of them wanting. They didn’t evaluate the VAK model as such, but they did review the Dunn and Dunn Learning Styles Inventory, which is very similar, and it didn’t come out with flying colours. I mentally consigned VAK Learning Styles to my educational fads wastebasket.

Fast forward a decade. Teachers using social media were becoming increasingly dismissive of VAK Learning Styles and of learning styles in general. Their objections appeared to trace back to Tom Bennett’s 2013 book Teacher Proof. Tom doesn’t like learning styles. In Separating neuromyths from science in education, an article on the New Scientist website, he summarises his ‘hitlist’ of neuromyths. He claims the VAK model is “the most popular version” of the learning styles theory, and that it originated in Neil Fleming’s VARK (visual, auditory, read-write, kinaesthetic) concept. According to Fleming, a teacher from New Zealand, his model does indeed derive from Neuro-Linguistic Programming. Bennett says the Coffield review “found up to 71 learning styles had been described, mostly not backed by credible evidence”.

This is where things started to get a bit confusing. The Coffield review identified 71 different learning styles models and evaluated 13 of them against four basic criteria; internal consistency, test-retest reliability, construct validity and predictive validity. The results were mixed, ranging from one model that met all four criteria to two that met none. Five of the 13 use the words ‘learning style(s)’ in their name. They included Dunn and Dunn’s Learning Styles Inventory that features visual, auditory, kinaesthetic and tactile (VAKT) modalities, but not Fleming’s VARK model nor the popular VAK Learning Styles model as such.
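Test-retest reliability – one of Coffield et al’s four criteria – is conventionally estimated as the correlation between scores from the same respondents on two occasions. As a rough illustration of the idea (the scores and the from-scratch Pearson function below are invented for this sketch, not taken from any of the reviewed instruments):

```python
# Test-retest reliability: correlate scores from the same respondents
# across two sittings of a questionnaire. All scores invented for illustration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

time1 = [12, 15, 9, 20, 11, 17]   # questionnaire scores, first sitting
time2 = [13, 14, 10, 19, 12, 18]  # same respondents, some weeks later

r = pearson_r(time1, time2)
print(f"test-retest reliability: r = {r:.2f}")  # a high r suggests a stable measure
```

A model whose questionnaire produces very different scores on a second sitting fails this criterion regardless of how plausible its theory sounds.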

Having cited John Hattie’s research on the effect size of educational interventions that found the impact of individualisation to be relatively low, Coffield et al concluded “it seems sensible to concentrate limited resources and staff efforts on those interventions that have the largest effect sizes” (p.134).

A later review of learning styles by Pashler et al (2008) took a different approach. The authors evaluated the evidence for what they call the meshing hypothesis; the claim that individualizing instruction to the learner’s style can enable them to achieve a better learning outcome. They found “plentiful evidence arguing that people differ in the degree to which they have some fairly specific aptitudes for different kinds of thinking and for processing different types of information” (p.105). But like the Coffield team, Pashler et al concluded “at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number” (p.105).

Populations, groups and individuals

The research by Coffield, Pashler and Hattie highlights a core challenge for any research relating to large populations; that what is true at the population level might not hold for minority groups or specific individuals – and vice versa. Behavioural studies that compare responses to different treatments usually present results at the group level (see for example Pashler et al’s Fig 1). Results from individuals that differ substantially from the group are usually treated as ‘outliers’ and overlooked. But a couple of high or low scores in a small group can make a substantial difference to the mean. It’s useful to know how the average student behaves if you’re researching teaching methods or developing educational policy, but the challenge for teachers is that they don’t teach the average student – they have to teach students across the range – including the outliers.
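The point about outliers in small groups is easy to demonstrate with toy numbers (all invented for illustration): a couple of extreme scores shift a class-sized mean noticeably, while the same scores barely register in a large sample.

```python
# How much can a couple of extreme scores move a group mean?
# All scores are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

small_group = [50] * 10    # a class-sized group, everyone scoring 50
large_group = [50] * 200   # a year-group-sized sample, everyone scoring 50
outliers = [95, 98]        # two unusually high scorers

print(mean(small_group + outliers))  # 57.75 – the mean jumps by nearly 8 points
print(mean(large_group + outliers))  # ~50.46 – the same outliers barely register
```

Which is why group-level results can be a poor guide to what’s happening in any particular classroom.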

So although it makes sense at the population level to focus on Hattie’s top types of intervention, those interventions might not yield the best outcomes for particular classes, groups or individual students. And although the effect sizes of interventions involving the personal attributes of students are relatively low, they are far from non-existent.
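Effect sizes of the kind Hattie reports are usually variants of Cohen’s d: the difference between two group means divided by a pooled standard deviation. A ‘low’ effect size is therefore small relative to the spread of scores, not zero. A minimal sketch, with invented post-test scores:

```python
# Cohen's d: standardised difference between two group means.
# All scores are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

def cohens_d(group_a, group_b):
    """(mean_a - mean_b) divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma, mb = mean(group_a), mean(group_b)
    ssa = sum((x - ma) ** 2 for x in group_a)
    ssb = sum((x - mb) ** 2 for x in group_b)
    pooled_sd = ((ssa + ssb) / (na + nb - 2)) ** 0.5
    return (ma - mb) / pooled_sd

intervention = [62, 68, 71, 65, 70, 66]  # post-test scores, new approach
control      = [60, 64, 69, 61, 66, 62]  # post-test scores, taught as usual

d = cohens_d(intervention, control)
print(f"d = {d:.2f}")  # Hattie's 'hinge point' for a worthwhile intervention is d = 0.40
```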

In short, reviewers have noted that:
• there is evidence to support the idea that people have particular aptitudes for particular types of learning,
• some learning styles models have some validity and reliability,
• there is little evidence that teaching children in their ‘best’ sensory modality will improve learning outcomes,
• given the limited resources available, the evidence doesn’t warrant teachers investing a lot of time and effort in learning styles assessments.

But you wouldn’t know that from reading some commentaries on learning styles. In the next couple of posts, I want to look at what Daniel Willingham and Tom Bennett have to say about them.

Bandler, R. & Grinder, J. (1975). The structure of magic I: A book about language and therapy. Science & Behaviour Books, Palo Alto.

Bandler, R. & Grinder, J. (1979). Frogs into Princes: The introduction to Neuro-Linguistic Programming. Eden Grove Editions (1990).

Bennett, T. (2013). Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it, Routledge.

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not another invention, rather a catalyst for reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.

Gardner, H. (1977). The Shattered Mind: The person after brain damage. Routledge & Kegan Paul.

Gardner, H. (1983). Frames of Mind: The theory of multiple intelligences. Fontana (1993).

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9, 106-116.

Witkowski, T. (2010). Thirty-Five Years of Research on Neuro-Linguistic Programming. NLP Research Data Base. State of the Art or Pseudoscientific Decoration? Polish Psychological Bulletin, 41, 58-66.

the view from the signpost: learning styles

Discovering that some popular teaching approaches (Learning Styles, Brain Gym, Thinking Hats) have less-than-robust support from research has prompted teachers to pay more attention to the evidence for their classroom practice. Teachers don’t have much time to plough through complex research findings. What they want are summaries, signposts to point them in the right direction. But research is a work in progress. Findings are often not clear-cut but contradictory, inconclusive or ambiguous. So it’s not surprising that some signposts – ‘do use synthetic phonics’, ‘don’t use Learning Styles’ – often spark heated discussion. The discussions often cover the same ground. In this post, I want to look at some recurring issues in debates about synthetic phonics (SP) and Learning Styles (LS).

Take-home messages

Synthetic phonics is an approach to teaching reading that begins by developing children’s awareness of the phonemes within words, links the phonemes with corresponding graphemes, and uses the grapheme-phoneme correspondence to decode the written word. Overall, the reading acquisition research suggests that SP is the most efficient method we’ve found to date of teaching reading. So the take-home message is ‘do use synthetic phonics’.

What most teachers mean by Learning Styles is a specific model developed by Fleming and Mills (1992) derived from the theory behind Neuro-Linguistic Programming. It proposes that students learn better in their preferred sensory modality – visual, aural, read/write or kinaesthetic (VARK). (The modalities are often reduced in practice to VAK – visual, auditory and kinaesthetic.) But ‘learning styles’ is also a generic term for a multitude of instructional models used in education and training. Coffield et al (2004) identified no fewer than 71 of them. Coffield et al’s evaluation didn’t include the VARK or VAK models, but a close relative – Dunn and Dunn’s Learning Styles Questionnaire – didn’t fare too well when tested against Coffield’s reliability and validity criteria (p.139). Other models did better, including Allinson and Hayes’ Cognitive Styles Index, which met all the criteria.

The take-home message for teachers from Coffield and other reviews is that given the variation in validity and reliability between learning styles models, it isn’t worth teachers investing time and effort in using any learning style approach to teaching. So far so good. If the take-home messages are clear, why the heated debate?

Lumping and splitting

‘Lumping’ and ‘splitting’ refer to different ways in which people categorise specific examples; they’re terms used mainly by taxonomists. ‘Lumpers’ tend to use broad categories and ‘splitters’ narrow ones. Synthetic phonics proponents rightly emphasise precision in the way systematic, synthetic phonics (SSP) is used to teach children to read. SSP is a systematic not a scattergun approach, it involves building up words from phonemes not breaking words down to phonemes, and developing phonemic awareness rather than looking at pictures or word shapes. SSP advocates are ‘splitters’ extraordinaire – in respect of SSP practice at least. Learning styles critics, by contrast, tend to lump all learning styles together, often failing to make a distinction between LS models.

SP proponents also become ‘lumpers’ where other approaches to reading acquisition are concerned. Whether it’s whole language, whole words or mixed methods, it makes no difference… it’s not SSP. And both SSP proponents and LS critics are often ‘lumpers’ in respect of the research behind the particular take-home message they’ve embraced so enthusiastically. So what? Why does lumping or splitting matter?

Lumping all non-SSP reading methods together or all learning styles models together matters because the take-home messages from the research are merely signposts pointing busy practitioners in the right direction, not detailed maps of the territory. The signposts tell us very little about the research itself. Peering at the research through the spectacles of the take-home message is likely to produce a distorted view.

The distorted view from the signpost

The research process consists of several stages, including those illustrated in the diagram below.
[diagram: theory to application]
Each stage might include several elements. Some of the elements might eventually emerge as robust (green), others might turn out to be flawed (red). The point of the research is to find out which is which. At any given time it will probably be unclear whether some components at each stage of the research process are flawed or not. Uncertainty is an integral part of scientific research. The history of science is littered with findings initially dismissed as rubbish that later ushered in a sea-change in thinking, and others that have been greeted as the Next Big Thing that have since been consigned to the trash.

Some of the SP and LS research findings have been contradictory, inconclusive or ambiguous. That’s par for the course. Despite the contradictions, unclear results and ambiguities, there might be general agreement about which way the signposts for practitioners are pointing. That doesn’t mean it’s OK to work backwards from the signpost and make assumptions about the research. In the diagram, there’s enough uncertainty in the research findings to put a question mark over all potential applications. But all that question mark itself tells us is that there’s uncertainty involved. A minor tweak to the theory could explain the contradictory, inconclusive or ambiguous results and then it would be green lights all the way down.

But why does that matter to teachers? It’s the signposts that are important to them, not the finer points of research methodology or statistical analysis. It matters because some of the teachers who are the most committed supporters of SP or critics of LS are also the most vociferous advocates of evidence-based practice.

Evidence: contradictory, inconclusive or ambiguous?

Decades of research into reading acquisition broadly support the use of synthetic phonics for teaching reading, although many of the research findings aren’t unambiguous. One example is the study carried out in Clackmannanshire by Rhona Johnston and Joyce Watson. The overall conclusion is that SP leads to big improvements in reading and spelling, but closer inspection of the results shows they are not entirely clear-cut, and the study’s methodology has been criticised. But you’re unlikely to know that if you rely on SP advocates for an evaluation of the evidence. Personally, I can’t see a problem with saying ‘the research evidence broadly supports the use of synthetic phonics for teaching reading’ and leaving it at that.

The evidence relating to learning styles models is also not watertight, although in this case, it suggests they are mostly not effective. But again, you’re unlikely to find out about the ambiguities from learning styles critics. Tom Bennett, for example, doesn’t like learning styles – as he makes abundantly clear in a TES blog post entitled “Zombie bølløcks: World War VAK isn’t over yet.”

The post is about the VAK Learning Styles model. But in the ‘Voodoo teaching’ chapter of his book Teacher Proof, Bennett concludes about learning styles in general “it is of course, complete rubbish as far as I can see” (p.147). Then he hedges his bets in a footnote; “IN MY OPINION”.

Tom’s an influential figure – government behaviour adviser, driving force behind the ResearchEd conferences and a frequent commentator on educational issues in the press. He’s entitled to lump together all learning styles models if he wants to and to write colourful opinion pieces about them if he gets the chance, but presenting the evidence in terms of his opinion, and missing out evidence that doesn’t support his opinion is misleading. It’s also at odds with an evidence-based approach to practice. Saying there’s mixed evidence for the effectiveness of learning styles models doesn’t take more words than implying there’s none.

So why don’t supporters in the case of SP, or critics in the case of LS, say what the evidence says, rather than what the signposts say? I’d hazard a guess it’s because they’re worried that teachers will see contradictory, inconclusive or ambiguous evidence as providing a loophole that gives them licence to carry on with their pet pedagogies regardless. But the risk of looking at the signpost rather than the evidence is that one set of dominant opinions will be replaced by another.

In the next few posts, I’ll be looking more closely at the learning styles evidence and what some prominent critics have to say about it.


David Didau responded to my thoughts about signposts and learning styles on his blog. Our discussion in the comments section revealed that he and I use the term ‘evidence’ to mean different things. Using words in different ways could explain everything.

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not another invention, rather a catalyst for reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.