phlogiston for beginners

Say “learning styles” to some teachers and you’re likely to get your head bitten off. Tom Bennett, the government’s behaviour tsar/guru/expert/advisor, really, really doesn’t like the idea of learning styles as he has made clear in a series of blogposts exploring the metaphor of the zombie.

I’ve come in for a bit of flak from various sources for suggesting that Bennett might have rather over-egged the learning styles pudding. I’ve been accused of not accepting the evidence, not admitting when I’m wrong, advancing neuromyths, being a learning styles advocate, being a closet learning styles advocate, and by implication not caring about the chiiiiiiiildren and being responsible for a metaphorical invasion by the undead. I reject all those accusations.

I’m still trying to figure out why learning styles have caused quite so much fuss. I understand that teachers might be a bit miffed about being told by schools to label children as visual, auditory or kinaesthetic (VAK) learners only to find there’s no evidence that they can be validly categorised in that way. But the time and money wasted on learning styles surely pales into insignificance next to the amounts squandered on the industry that’s sprung up around some questionable assessment methods, an SEN system that a Commons Select Committee pronounced not fit for purpose, or a teacher training system that for generations has failed to equip teachers with the skills they need to evaluate popular wheezes like VAK and brain gym.

And how many children have suffered actual harm as a result of being given a learning style label? I’m guessing very few compared to the number whose lives have been blighted by failing the 11+, being labelled ‘educationally subnormal’, or more recent forms of failure to meet the often arbitrary requirements of the education system. What is it about learning styles?

the learning styles neuromyth

I made the mistake of questioning some of the assumptions implicit in this article, notably that the concept of learning styles is a false belief, that it’s therefore a neuromyth and is somehow harmful in that it raises false hopes about transforming society.

My suggestion that the evidence for the learning styles concept is mixed rather than non-existent, that there are some issues around the idea of the neuromyth that need to be addressed, and that the VAK idea, even if wrong, probably isn’t the biggest hole in the education system’s bucket, was taken as a sign that my understanding of the scientific method must be flawed.

the evidence for aliens

One teacher (no names, no pack drill) said “This is like saying the ‘evidence for aliens is mixed’”. No it isn’t. There are so many planets in the universe it’s highly unlikely Earth is the only one supporting life-forms, but so far, we have next to no evidence of their existence. But a learning style isn’t a life-form, it’s a construct, a label for phenomena that researchers have observed, and a pretty woolly label at that. It could refer to a wide range of very different phenomena, some of which are really out there, some of which are experimental artefacts, and some of which might be figments of a researcher’s imagination. It’s pointless speculating about whether learning styles exist, because the answer depends on what you label as a ‘learning style’. Life-forms are a different kettle of fish; there’s some debate around what constitutes a life-form and what doesn’t, but the category is far more tightly specified than any learning style ever has been.

you haven’t read everything

I was then chided for pointing out that Tom Bennett said he hadn’t finished reading the Coffield Learning Styles Review when (obviously) I hadn’t read everything there was to read on the subject either. But I hadn’t complained that Tom hadn’t read everything; I was pointing out that, by his own admission in his book Teacher Proof, he’d stopped reading before he got to the part of the Coffield review that discusses learning styles models found to have validity and reliability, so it’s not surprising he came to a conclusion that Coffield didn’t support.

my evidence weighs more than your evidence

Then, “I’ve seen the tiny, tiny evidence you cite to support LS. Dwarfed by oceans of ‘no evidence’. There’s more evidence for ET than LS”. That’s not how the evaluation of scientific evidence works. It isn’t a case of putting the ‘for’ evidence in one pan of the scales and the ‘against’ evidence in the other and the heaviest evidence wins. On that basis, the heliocentric theories of Copernicus and Kepler would never have seen the light of day.
 
how about homeopathy?

Finally, “How about homeopathy? Mixed evidence from studies.” The implication is that if I’m not dismissing learning styles because the evidence is mixed, then I can’t dismiss homeopathy. Again, the analogy doesn’t hold. Research shows that there is an effect associated with homeopathic treatments – something happens in some cases. But the theory of homeopathy doesn’t make sense in the context of what we know about biology, chemistry and physics. This suggests that the problem lies in the explanation for the effect, not the effect itself. The concept of learning styles, by contrast, doesn’t conflict with what we know about the way people learn. It’s quite possible that people do have stable traits when it comes to learning. Whether they do, and if so what those traits are, is another matter.

Concluding from complex and variable evidence that learning styles don’t exist, and that not dismissing them out of hand is akin to believing in aliens and homeopathy, looks to me suspiciously like saying “Phlogiston? Pfft! All that stuff about iron filings increasing in weight when they combust is a load of hooey.”

learning styles: what does Tom Bennett* think?

Tom Bennett’s disdain for learning styles is almost palpable, reminiscent at times of Richard Dawkins commenting on a papal pronouncement, but it started off being relatively tame. In May 2013, in a post on the ResearchEd2013 website coinciding with the publication of his book Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it, he asks ‘why are we still talking about learning styles?’ and claims “there is an overwhelming amount of evidence suggesting that learning styles do not exist, and that therefore we should not be instructing students according to these false preferences”.

In August the same year, in his New Scientist post Separating neuromyths from science in education, he tones down the claim a little, pointing out that learning styles models are “mostly not backed by credible evidence”.

But the following April, Tom’s back with a vitriolic vengeance in the TES with Zombie bølløcks: World War VAK isn’t over yet. He rightly – and colourfully – points out that time or resources shouldn’t be wasted on initiatives that have not been demonstrated to be effective. And he’s quite right to ask “where were the educationalists who read the papers, questioned the credentials and demanded the evidence?” But Bennett isn’t just questioning, he’s angry.

He’s thinking of putting on his “black Thinking Hat of reprobation and fury”. Why? Because “it’s all bølløcks, of course. It’s bølløcks squared, actually, because not only has recent and extensive investigation into learning styles shown absolutely no correlation between their use and any perceptible outcome in learning, not only has it been shown to have no connection to the latest ways we believe the mind works, but even investigation of the original research shows that it has no credible claim to be taken seriously. Learning Styles are the ouija board of serious educational research” and he includes a link to Pashler et al to prove it.

Six months later, Bennett teams up with Daniel Willingham for a TES piece entitled Classroom practice – Listen closely, learning styles are a lost cause in which Willingham reiterates his previous arguments and Tom contributes an opinion piece dismissing what he calls zombie theories, ranging from red ink negativity to Neuro-Linguistic Programming and Multiple Intelligences.

why learning styles are not a neuromyth

Tom’s anger would be justified if he were right. But he isn’t. In May 2013, in Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it he says of the VAK model “And yet there is no evidence for it whatsoever. None. Every major study done to see if using learning style strategies actually work has come back with totally negative results” (p.144). He goes on to dismiss Kolb’s Learning Style Inventory and Honey and Mumford’s Learning Styles Questionnaire, adding “there are others but I’m getting tired just typing all the categories and wondering why they’re all so different and why the researchers disagree” (p.146). That tells us more about Tom’s evaluation of the research than it does about the research itself.

Education and training research has long suffered from a serious lack of rigour. One reason is that both are heavily derived fields of discourse; education and training theory draws on disciplines as diverse as psychology, sociology, philosophy, politics, architecture, economics and medicine. Education and training researchers need a good understanding of a wide range of fields. Taking all relevant factors into account is challenging, and in the meantime teachers and trainers have to get on with the job. So it’s tempting to get an apparently effective learning model out there ASAP, rather than make sure it’s rigorously tested and systematically compared to other learning models first.

Review paper after review paper has come to similar conclusions when evaluating the evidence for learning styles models:

• there are many different learning styles models, featuring many different learning styles
• it’s difficult to compare models because they use different constructs
• the evidence supporting learning styles models is weak, often because of methodological issues
• some models do have validity or reliability; others don’t
• people do have different aptitudes in different sensory modalities, but
• there’s no evidence that teaching/training all students in their ‘best’ modality improves performance.

If Tom hadn’t got tired typing he might have discovered that some learning styles models have more validity than the three he mentions. And if he’d read the Coffield review more carefully he would have found out that the reason models are so different is because they are based on different theories and use different (often poorly operationalised) constructs, and that researchers disagree for a host of reasons, a phenomenon he’d do well to get his head round if he wants teachers to get involved in research.

evaluating the evidence

Reviewers of learning styles models have evaluated the evidence by looking in detail at its content and quality and have then drawn general conclusions. They’ve examined, for example, the validity and reliability of component constructs, what hypotheses have been tested, the methods used in evaluating the models and whether studies have been peer-reviewed.

What they’ve found is that people do have learning styles (depending on how learning style is defined), that there are considerable variations in validity and reliability between learning styles models, and that overall the quality of the evidence isn’t very good. As a consequence, reviewers have been in general agreement that there isn’t enough evidence to warrant teachers investing time or resources in a learning styles approach in the classroom.

But Tom’s reasoning appears to move in the opposite direction; to start with the conclusion that teachers shouldn’t waste time or resources on learning styles, and to infer that;

• variable evidence means all learning styles models can be rejected
• poor quality evidence means all learning styles models can be rejected
• if some learning styles models are invalid and unreliable they must all be invalid and unreliable
• if the evidence is variable and poor and some learning styles models are invalid or unreliable, then
• learning styles don’t exist.

definitions of learning style

It’s Daniel Willingham’s video Learning styles don’t exist that sums it up for Tom. So why does Willingham say learning styles don’t exist? It all depends on definitions, it seems. On his learning styles FAQ page Willingham says;

I think that often when people believe that they observe obvious evidence for learning styles, they are mistaking it for ability. The idea that people differ in ability is not controversial—everyone agrees with that. Some people are good at dealing with space, some people have a good ear for music, etc. So the idea of “style” really ought to mean something different. If it just means ability, there’s not much point in adding the new term.

This is where Willingham lost me. Obviously, a preference for learning in a particular way is not the same as an ability to learn in a particular way. And I agree that there’s no point talking about style if what you mean is ability. The VAK model claims that preference is an indicator of ability, and the evidence doesn’t support that hypothesis.

But not all learning styles models are about preference; most claim to identify patterns of ability. That’s why learning styles models have proliferated; employers want a quick overall assessment of employees’ strengths and weaknesses when it comes to learning. Because the models encompass factors other than ability – such as personality and ways of approaching problem-solving – referring to learning styles rather than ability seems reasonable.

So if the idea that people differ in ability is not controversial, many learning styles models claim to assess ability, and some are valid and/or reliable, how do Willingham and Bennett arrive at the conclusion that learning styles don’t exist?

The answer, I suspect, is that they are equating learning styles with the VAK model, the version most widely used in primary education. It’s no accident that Coffield et al evaluated learning styles and pedagogy in post-16 learning; it’s the world outside the education system that’s the main habitat of learning styles models. It’s fair to say there’s no evidence to support the VAK model – and many others – and that it’s not worth teachers investing time and effort in them. But the evidence simply doesn’t warrant lumping together all learning styles models and dismissing them outright.

taking liberties with the evidence

I can understand that if you’re a teacher who’s been consistently told that learning styles are the way to go and then discovers there’s insufficient evidence to warrant using them, you might be a bit miffed. But Tom’s reprobation and fury don’t warrant his taking liberties with the evidence. This is where I think Tom’s thinking goes awry;

• If the evidence supporting learning styles models is variable it’s variable. It means some learning styles models are probably rubbish but some aren’t. Babies shouldn’t be thrown out with bathwater.

• If the evidence evaluating learning styles is of poor quality, it’s of poor quality. You can’t conclude from poor quality evidence that learning styles models are rubbish. You can’t conclude anything from poor quality evidence.

• If the evidence for learning styles models is variable and of poor quality, it isn’t safe to conclude that learning styles don’t exist. Especially if review paper after review paper has concluded that they do – depending on your definition of learning styles.

I can understand why Willingham and Bennett want to alert teachers to the lack of evidence for the VAK learning styles model. But I felt Daniel Willingham’s claim that learning styles don’t exist was misleading and that Tom Bennett’s vitriol was unjustified. There’s a real risk in the case of learning styles of one neuromyth being replaced by another.

*Tom appears to have responded to this post here and here, with two more articles about zombies.

References
Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9, 105-119.

learning styles: the evidence

The PTA meeting was drawing to a close. The decision to buy more books for the library instead of another interactive whiteboard had been unanimous, and the conversation had turned to educational fads.

“Now, of course,” the headteacher was saying, “it’s all learning styles. We’re visual, auditory or kinaesthetic learners – you know, Howard Gardner’s Multiple Intelligences.” His comment caught my attention because I was familiar with Gardner’s managerial competencies, but couldn’t recall them having anything to do with sensory modalities and I didn’t know they’d made their way into primary education. My curiosity piqued, I read Gardner’s book Frames of Mind: The Theory of Multiple Intelligences. It prompted me to delve into his intriguing earlier account of working with brain-damaged patients – The Shattered Mind.

Where does the VAK model come from?

Gardner’s multiple intelligences model was clearly derived from his pretty solid knowledge of brain function, but wherever the idea of visual, auditory and kinaesthetic (VAK) learning styles had come from, it didn’t look like it came from Gardner. A bit of Googling ‘learning styles’ kept bringing up the names Dunn and Dunn, but I couldn’t find anything on the VAK model’s origins. So I phoned a friend. “It’s based on Neuro-Linguistic Programming”, she said.

This didn’t bode well. Neuro-Linguistic Programming (NLP) is a therapeutic approach devised in the 1970s by Richard Bandler, a psychology graduate, and John Grinder, then an assistant professor of linguistics who, like Frank Smith, had worked in George magical-number-seven-plus-or-minus-two Miller’s lab and been influenced by Noam Chomsky’s ideas about linguistics.

If I’ve understood Bandler and Grinder’s idea correctly, they proposed that insights into people’s internal, subjective sensory representations can be gleaned from their eye movements and the words they use. According to their model, this makes it possible to change those internal representations to reduce anxiety or eliminate phobias. Although there are some valid elements in the theory behind NLP, evaluations of the model have in the main been critical and evidence supporting the effectiveness of NLP as a therapeutic approach has been notable by its absence (see e.g. Witkowski, 2010).

So the VAK Learning Styles model appeared to be an educational intervention derived from a debatable theory and a therapeutic technique that doesn’t work too well.

Evaluating the evidence

Soon after I’d phoned my friend, in 2004, Frank Coffield and colleagues published a systematic and rigorous evaluation of 13 learning styles models used in post-16 learning and found the reliability and validity of many of them wanting. They didn’t evaluate the VAK model as such, but did review the Dunn and Dunn Learning Styles Inventory, which is very similar, and it didn’t come out with flying colours. I mentally consigned VAK Learning Styles to my educational fads wastebasket.

Fast forward a decade. Teachers using social media were becoming increasingly dismissive of VAK Learning Styles and of learning styles in general. Their objections appeared to trace back to Tom Bennett’s 2013 book Teacher Proof. Tom doesn’t like learning styles. In Separating neuromyths from science in education, an article on the New Scientist website, he summarises his ‘hitlist’ of neuromyths. He claims the VAK model is “the most popular version” of the learning styles theory, and that it originated in Neil Fleming’s VARK (visual, auditory, read-write, kinaesthetic) concept. According to Fleming, a teacher from New Zealand, his model does indeed derive from Neuro-Linguistic Programming. Bennett says the Coffield review “found up to 71 learning styles had been described, mostly not backed by credible evidence”.

This is where things started to get a bit confusing. The Coffield review identified 71 different learning styles models and evaluated 13 of them against four basic criteria; internal consistency, test-retest reliability, construct validity and predictive validity. The results were mixed, ranging from one model that met all four criteria to two that met none. Five of the 13 use the words ‘learning style(s)’ in their name. The 13 included Dunn and Dunn’s Learning Styles Inventory, which features visual, auditory, kinaesthetic and tactile (VAKT) modalities, but not Fleming’s VARK model nor the popular VAK Learning Styles model as such.

Having cited John Hattie’s research on the effect size of educational interventions that found the impact of individualisation to be relatively low, Coffield et al concluded “it seems sensible to concentrate limited resources and staff efforts on those interventions that have the largest effect sizes” (p.134).

A later review of learning styles by Pashler et al (2008) took a different approach. The authors evaluated the evidence for what they call the meshing hypothesis; the claim that individualizing instruction to the learner’s style can enable them to achieve a better learning outcome. They found “plentiful evidence arguing that people differ in the degree to which they have some fairly specific aptitudes for different kinds of thinking and for processing different types of information” (p.105). But like the Coffield team, Pashler et al concluded “at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would better be devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number” (p.105).

Populations, groups and individuals

The research by Coffield, Pashler and Hattie highlights a core challenge for any research relating to large populations; that what is true at the population level might not hold for minority groups or specific individuals – and vice versa. Behavioural studies that compare responses to different treatments usually present results at the group level (see for example Pashler et al’s Fig 1). Results from individuals that differ substantially from the group are usually treated as ‘outliers’ and overlooked. But a couple of high or low scores in a small group can make a substantial difference to the mean. It’s useful to know how the average student behaves if you’re researching teaching methods or developing educational policy, but the challenge for teachers is that they don’t teach the average student – they have to teach students across the range – including the outliers.
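To put a number on that, here’s a minimal sketch with invented scores; a couple of unusually high scorers added to a small group pull the mean well away from the ‘typical’ student:

```python
# Invented scores for a small group of students (hypothetical data).
group = [52, 55, 48, 50, 53, 51, 49, 54]   # eight fairly typical scores
with_outliers = group + [90, 95]           # the same group plus two outliers

def mean(scores):
    return sum(scores) / len(scores)

print(mean(group))           # 51.5
print(mean(with_outliers))   # 59.7 - the 'average student' now describes nobody
```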

So although it makes sense at the population level to focus on Hattie’s top types of intervention, those interventions might not yield the best outcomes for particular classes, groups or individual students. And although the effect sizes of interventions involving the personal attributes of students are relatively low, they are far from non-existent.

In short, reviewers have noted that:
• there is evidence to support the idea that people have particular aptitudes for particular types of learning,
and
• some learning styles models have some validity and reliability,
but
• there is little evidence that teaching children in their ‘best’ sensory modality will improve learning outcomes,
so
• given the limited resources available, the evidence doesn’t warrant teachers investing a lot of time and effort in learning styles assessments.

But you wouldn’t know that from reading some commentaries on learning styles. In the next couple of posts, I want to look at what Daniel Willingham and Tom Bennett have to say about them.

Bibliography
Bandler, R. & Grinder, J. (1975). The structure of magic I: A book about language and therapy. Science & Behaviour Books, Palo Alto.

Bandler, R. & Grinder, J. (1979). Frogs into Princes: The introduction to Neuro-Linguistic Programming. Eden Grove Editions (1990).

Bennett, T. (2013). Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it, Routledge.

Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not Another Inventory, Rather a Catalyst for Reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.

Gardner, H. (1977). The Shattered Mind: The person after brain damage. Routledge & Kegan Paul.

Gardner, H. (1983). Frames of Mind: The theory of multiple intelligences. Fontana (1993).

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008). Learning Styles: Concepts and Evidence. Psychological Science in the Public Interest, 9, 105-119.

Witkowski, T. (2010). Thirty-Five Years of Research on Neuro-Linguistic Programming. NLP Research Data Base. State of the Art or Pseudoscientific Decoration? Polish Psychological Bulletin, 41, 58-66.

the view from the signpost: learning styles

Discovering that some popular teaching approaches (Learning Styles, Brain Gym, Thinking Hats) have less-than-robust support from research has prompted teachers to pay more attention to the evidence for their classroom practice. Teachers don’t have much time to plough through complex research findings. What they want are summaries, signposts to point them in the right direction. But research is a work in progress. Findings are often not clear-cut but contradictory, inconclusive or ambiguous. So it’s not surprising that some signposts – ‘do use synthetic phonics’, ‘don’t use Learning Styles’ – often spark heated discussion. The discussions often cover the same ground. In this post, I want to look at some recurring issues in debates about synthetic phonics (SP) and Learning Styles (LS).

Take-home messages

Synthetic phonics is an approach to teaching reading that begins by developing children’s awareness of the phonemes within words, links the phonemes with corresponding graphemes, and uses the grapheme-phoneme correspondence to decode the written word. Overall, the reading acquisition research suggests that SP is the most efficient method we’ve found to date of teaching reading. So the take-home message is ‘do use synthetic phonics’.
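As a toy illustration of the decoding step described above, here’s a minimal sketch; the grapheme-phoneme table is invented to cover two example words and is nothing like a full phonics scheme:

```python
# A toy grapheme-to-phoneme table (invented, covering only two example words).
gpc = {"sh": "/ʃ/", "ee": "/iː/", "p": "/p/", "ch": "/tʃ/", "i": "/ɪ/", "n": "/n/"}

def decode(word):
    """Map graphemes to phonemes left to right, then 'blend' them."""
    phonemes, i = [], 0
    while i < len(word):
        # Try the longer grapheme first, so digraphs like 'sh' match as units.
        for size in (2, 1):
            grapheme = word[i:i + size]
            if grapheme in gpc:
                phonemes.append(gpc[grapheme])
                i += size
                break
        else:
            raise ValueError(f"no correspondence for {word[i]!r}")
    return " ".join(phonemes)

print(decode("sheep"))  # /ʃ/ /iː/ /p/
print(decode("chin"))   # /tʃ/ /ɪ/ /n/
```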

What most teachers mean by Learning Styles is a specific model developed by Fleming and Mills (1992) derived from the theory behind Neuro-Linguistic Programming. It proposes that students learn better in their preferred sensory modality – visual, aural, read/write or kinaesthetic (VARK). (The modalities are often reduced in practice to VAK – visual, auditory and kinaesthetic.) But ‘learning styles’ is also a generic term for a multitude of instructional models used in education and training. Coffield et al (2004) identified no fewer than 71 of them. Coffield et al’s evaluation didn’t include the VARK or VAK models, but a close relative – Dunn and Dunn’s Learning Styles Inventory – didn’t fare too well when tested against Coffield’s reliability and validity criteria (p.139). Other models did better, including Allinson and Hayes’ Cognitive Style Index, which met all the criteria.

The take-home message for teachers from Coffield and other reviews is that, given the variation in validity and reliability between learning styles models, it isn’t worth investing time and effort in any learning styles approach to teaching. So far so good. If the take-home messages are clear, why the heated debate?

Lumping and splitting

‘Lumping’ and ‘splitting’ refer to different ways in which people categorise specific examples; they’re terms used mainly by taxonomists. ‘Lumpers’ tend to use broad categories and ‘splitters’ narrow ones. Synthetic phonics proponents rightly emphasise precision in the way systematic, synthetic phonics (SSP) is used to teach children to read. SSP is a systematic not a scattergun approach, it involves building up words from phonemes not breaking words down to phonemes, and developing phonemic awareness rather than looking at pictures or word shapes. SSP advocates are ‘splitters’ extraordinaire – in respect of SSP practice at least. Learning styles critics, by contrast, tend to lump all learning styles together, often failing to make a distinction between LS models.

SP proponents also become ‘lumpers’ where other approaches to reading acquisition are concerned. Whether it’s whole language, whole words or mixed methods, it makes no difference… it’s not SSP. And both SSP proponents and LS critics are often ‘lumpers’ in respect of the research behind the particular take-home message they’ve embraced so enthusiastically. So what? Why does lumping or splitting matter?

Lumping all non-SSP reading methods together or all learning styles models together matters because the take-home messages from the research are merely signposts pointing busy practitioners in the right direction, not detailed maps of the territory. The signposts tell us very little about the research itself. Peering at the research through the spectacles of the take-home message is likely to produce a distorted view.

The distorted view from the signpost

The research process consists of several stages, including those illustrated in the diagram below.
[Diagram: theory to application]
Each stage might include several elements. Some of the elements might eventually emerge as robust (green), others might turn out to be flawed (red). The point of the research is to find out which is which. At any given time it will probably be unclear whether some components at each stage of the research process are flawed or not. Uncertainty is an integral part of scientific research. The history of science is littered with findings initially dismissed as rubbish that later ushered in a sea-change in thinking, and others that have been greeted as the Next Big Thing that have since been consigned to the trash.

Some of the SP and LS research findings have been contradictory, inconclusive or ambiguous. That’s par for the course. Despite the contradictions, unclear results and ambiguities, there might be general agreement about which way the signposts for practitioners are pointing. That doesn’t mean it’s OK to work backwards from the signpost and make assumptions about the research. In the diagram, there’s enough uncertainty in the research findings to put a question mark over all potential applications. But all that question mark itself tells us is that there’s uncertainty involved. A minor tweak to the theory could explain the contradictory, inconclusive or ambiguous results and then it would be green lights all the way down.

But why does that matter to teachers? It’s the signposts that are important to them, not the finer points of research methodology or statistical analysis. It matters because some of the teachers who are the most committed supporters of SP or critics of LS are also the most vociferous advocates of evidence-based practice.

Evidence: contradictory, inconclusive or ambiguous?

Decades of research into reading acquisition broadly support the use of synthetic phonics for teaching reading, although not all of the findings are unambiguous. One example is the study carried out in Clackmannanshire by Rhona Johnston and Joyce Watson. The overall conclusion is that SP leads to big improvements in reading and spelling, but closer inspection of the results shows they are not entirely clear-cut, and the study’s methodology has been criticised. But you’re unlikely to know that if you rely on SP advocates for an evaluation of the evidence. Personally, I can’t see a problem with saying ‘the research evidence broadly supports the use of synthetic phonics for teaching reading’ and leaving it at that.

The evidence relating to learning styles models is also not watertight, although in this case, it suggests they are mostly not effective. But again, you’re unlikely to find out about the ambiguities from learning styles critics. Tom Bennett, for example, doesn’t like learning styles – as he makes abundantly clear in a TES blog post entitled “Zombie bølløcks: World War VAK isn’t over yet.”

The post is about the VAK Learning Styles model. But in the ‘Voodoo teaching’ chapter of his book Teacher Proof, Bennett concludes about learning styles in general “it is of course, complete rubbish as far as I can see” (p.147). Then he hedges his bets in a footnote; “IN MY OPINION”.

Tom’s an influential figure – government behaviour adviser, driving force behind the ResearchEd conferences and a frequent commentator on educational issues in the press. He’s entitled to lump together all learning styles models if he wants to and to write colourful opinion pieces about them if he gets the chance, but presenting the evidence in terms of his opinion, and missing out evidence that doesn’t support his opinion, is misleading. It’s also at odds with an evidence-based approach to practice. Saying there’s mixed evidence for the effectiveness of learning styles models doesn’t take more words than implying there’s none.

So why don’t supporters in the case of SP, or critics in the case of LS, say what the evidence says, rather than what the signposts say? I’d hazard a guess it’s because they’re worried that teachers will see contradictory, inconclusive or ambiguous evidence as providing a loophole that gives them licence to carry on with their pet pedagogies regardless. But the risk of looking at the signpost rather than the evidence is that one set of dominant opinions will be replaced by another.

In the next few posts, I’ll be looking more closely at the learning styles evidence and what some prominent critics have to say about it.

Note:

David Didau responded to my thoughts about signposts and learning styles on his blog. Our discussion in the comments section revealed that he and I use the term ‘evidence’ to mean different things. Using words in different ways. Could explain everything.

References
Coffield, F., Moseley, D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Fleming, N. & Mills, C. (1992). Not Another Inventory, Rather a Catalyst for Reflection. To Improve the Academy. Professional and Organizational Development Network in Higher Education. Paper 246.

is systematic synthetic phonics generating neuromyths?

A recent Twitter discussion about systematic synthetic phonics (SSP) was sparked by a note to parents of children in a reception class, advising them what to do if their children got stuck on a word when reading. The first suggestion was “encourage them to sound out unfamiliar words in units of sound (e.g. ch/sh/ai/ea) and to try to blend them”. If that failed “can they use the pictures for any clues?” Two other strategies followed. The ensuing discussion began by questioning the wisdom of using pictures for clues and then went off at many tangents – not uncommon in conversations about SSP.
[Image: richard adams reading clues]

SSP proponents are, rightly, keen on evidence. The body of evidence supporting SSP is convincing but it’s not the easiest to locate; much of the research predates the internet by decades or is behind a paywall. References are often to books, magazine articles or anecdote; not to be discounted, but not what usually passes for research. As a consequence it’s quite a challenge to build up an overview of the evidence for SSP that’s free of speculation, misunderstandings and theory that’s been superseded. The tangents that came up in this particular discussion are, I suggest, the result of assuming that if something is true for SSP in particular it must also be true for reading, perception, development or biology in general. Here are some of the inferences that came up in the discussion.

You can’t guess a word from a picture
Children’s books are renowned for their illustrations. Good illustrations can support or extend the information in the text, showing readers what a chalet, a mountain stream or a pine tree looks like, for example. Author and artist usually have detailed discussions about illustrations to ensure that the book forms an integrated whole and is not just a text with embellishments.

If the child is learning to read, pictures can serve to focus attention (which could be wandering anywhere) on the content of the text and can have a weak priming effect, increasing the likelihood of the child accessing relevant words. If the picture shows someone climbing a mountain path in the snow, the text is unlikely to contain words about sun, sand and ice-creams.

I understand why SSP proponents object to the child being instructed to guess a particular word by looking at a picture; the guess is likely to be wrong and the child distracted from decoding the word. But some teachers don’t seem to be keen on illustrations per se. As one teacher put it “often superficial time consuming detract from learning”.

Cues are clues are guesswork
The note to parents referred to ‘clues’ in the pictures. One contributor cited a blogpost that claimed “with ‘mixed methods’ eyes jump around looking for cues to guess from”. Clues and cues are often used interchangeably in discussions about phonics on social media. That’s understandable; the words have similar meanings and a slip on the keyboard can transform one into the other. But in a discussion about reading methods, the distinction between guessing, clues and cues is an important one.

Guessing involves drawing conclusions in the absence of enough information to give you a good chance of being right; it’s haphazard, speculative. A clue is a piece of information that points you in a particular direction. A cue has a more specific meaning depending on context; e.g. theatrical cues, social cues, sensory cues. In reading research, a cue is a piece of information about something the observer is attending to, or a property of a thing to be attended to. It could be the beginning sound or end letter of a word, or an image representing the word. Cues are directly related to the matter in hand, clues are more indirectly related, guessing is a stab in the dark.

The distinction is important because if teachers are using the terms cue and clue interchangeably and assuming they both involve guessing there’s a risk they’ll mistakenly dismiss references to ‘cues’ in reading research as guessing or clues, which they are not.

Reading isn’t natural
Another distinction that came up in the discussion was the idea of natural vs. non-natural behaviours. One argument for children needing to be actively taught to read rather than picking it up as they go along is that reading, unlike walking and talking, isn’t a ‘natural’ skill. The argument goes that reading is a relatively recent technological development so we couldn’t possibly have evolved mechanisms for reading in the same way as we have evolved mechanisms for walking and talking. One proponent of this idea is Diane McGuinness, an influential figure in the world of synthetic phonics.

The argument rests on three assumptions. The first is that we have evolved specific mechanisms for walking and talking but not for reading. The ideas that evolution has an aim or purpose and that if everybody does something we must have evolved a dedicated mechanism to do it, are strongly contested by those who argue instead that we can do what our anatomy and physiology enable us to do (see arguments over Chomsky’s linguistic theory). But you wouldn’t know about that long-standing controversy from reading McGuinness’s books or comments from SSP proponents.

The second assumption is that children learn to walk and talk without much effort or input from others. One teacher called the natural/non-natural distinction “pretty damn obvious”. But sometimes the pretty damn obvious isn’t quite so obvious when you look at what’s actually going on. By the time they start school, the average child will have rehearsed walking and talking for thousands of hours. And most toddlers experience a considerable input from others when developing their walking and talking skills even if they don’t have what one contributor referred to as a “WEIRDo Western mother”. Children who’ve experienced extreme neglect (such as those raised in the notorious Romanian orphanages) tend to show significant developmental delays.

The third assumption is that learning to use technological developments requires direct instruction. Whether it does or not depends on the complexity of the task. Pointy sticks and heavy stones are technologies used in foraging and hunting, but most small children can figure out for themselves how to use them – as do chimps and crows. Is the use of sticks and stones by crows, chimps or hunter-gatherers natural or non-natural? A bicycle is a man-made technology more complex than sticks and stones, but most people are able to figure out how to ride a bike simply by watching others do it, even if a bit of practice is needed before they can do it themselves. Is learning to ride a bike with a bit of support from your mum or dad natural or non-natural?

Reading English is a more complex task than riding a bike because of the number of letter-sound correspondences. You’d need a fair amount of watching and listening to written language being read aloud to be able to read for yourself. And you’d need considerable instruction and practice before being able to fly a fighter jet because the technology is massively more complex than that involved in bicycles and alphabetic scripts.

One teacher asked “are you really going to go for the continuum fallacy here?” No idea why he considers a continuum a fallacy. In the natural/non-natural distinction used by SSP proponents there are three continua involved;

• the complexity of the task
• the length of rehearsal time required to master the task, and
• the extent of input from others that’s required.

Some children learn to read simply by being read to, reading for themselves and asking for help with words they don’t recognise. But because reading is a complex task, for most children learning to read by immersion like that would take thousands of hours of rehearsal. It makes far more sense to cut to the chase and use explicit instruction. In principle, learning to fly a fighter jet would be possible through trial-and-error, but it would be a stupidly costly approach to training pilots.

Technology is non-biological
I was told by several teachers that reading, riding a bike and flying an aircraft weren’t biological functions. I fail to see how they can’t be, since all involve human beings using their brain and body. It then occurred to me that the teachers were equating ‘biological’ with ‘natural’, or with the human body alone. In other words, if you acquire a skill that involves only body parts (e.g. walking or talking) it’s biological. If it involves anything other than a body part it’s not biological. Not sure where that leaves hunting with wooden spears, making baskets or weaving woollen fabric using a wooden loom and shuttle.

Teaching and learning are interchangeable
Another tangent was whether or not learning is involved in sleeping, eating and drinking. I contended that it is; newborns do not sleep, eat or drink in the same way as most of them will be sleeping, eating or drinking nine months later. One teacher kept telling me they don’t need to be taught to do those things. I can see why teachers often conflate teaching and learning, but they are not two sides of the same coin. You can teach children things but they might fail to learn them. And children can learn things that nobody has taught them. It’s debatable whether or not parents shaping a baby’s sleeping routine, spoon feeding them or giving them a sippy cup instead of a bottle count as teaching, but it’s pretty clear there’s a lot of learning going on.

What’s true for most is true for all
I was also told by one teacher that all babies crawl (an assertion he later modified) and by a school governor that they can all suckle (an assertion that wasn’t modified). Sweeping generalisations like this coming from people working in education are worrying. Children vary. They vary a lot. Even if only 0.1% of children do or don’t do something, that would involve 8,000 children in English schools. Some and most are not all or none, and teachers of all people should be aware of that.
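For what it’s worth, the arithmetic behind that figure, assuming (as the post implies) roughly eight million pupils in English schools:

```python
pupils = 8_000_000   # assumed approximate pupil population of English schools
proportion = 0.001   # 0.1%
print(f"{pupils * proportion:,.0f} children")  # 8,000 children
```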

A core factor in children learning to read is the complexity of the task. If the task is a complex one, like reading, most children are likely to learn more quickly and effectively if you teach them explicitly. You can’t infer from that that all children are the same, they all learn in the same way or that teaching and learning are two sides of the same coin. Nor can you infer from a tenuous argument used to justify the use of SSP that distinctions between natural and non-natural or biological and technological are clear, obvious, valid or helpful. The evidence that supports SSP is the evidence that supports SSP. It doesn’t provide a general theory for language, education or human development.

synthetic phonics, dyslexia and natural learning

Too intense a focus on the virtues of synthetic phonics (SP) can, it seems, result in related issues getting a bit blurred. I discovered that some whole language supporters do appear to have been ideologically motivated but that the whole language approach didn’t originate in ideology. And as far as I can tell we don’t know if SP can reduce adult functional illiteracy rates. But I wouldn’t have known either of those things from the way SP is framed by its supporters. SP proponents also make claims about how the brain is involved in reading. In this post I’ll look at two of them; dyslexia and natural learning.

Dyslexia

Dyslexia started life as a descriptive label for the reading difficulties adults can develop due to brain damage caused by a stroke or head injury. Some children were observed to have similar reading difficulties despite otherwise normal development. The adults’ dyslexia was acquired (they’d previously been able to read) but the children’s dyslexia was developmental (they’d never learned to read). The most obvious conclusion was that the children also had brain damage – but in the early 20th century when the research started in earnest there was no easy way to determine that.

Medically, developmental dyslexia is still only a descriptive label meaning ‘reading difficulties’ (causes unknown, might/might not be biological, might vary from child to child). However, dyslexia is now also used to denote a supposed medical condition that causes reading difficulties. This new usage is something that Diane McGuinness complains about in Why Children Can’t Read and What We Can Do About It.

I completely agree with McGuinness that this use isn’t justified and has led to confusion and unintended and unwanted outcomes. But I think she muddies the water further by peppering her discussion of dyslexia (pp. 132-140) with debatable assertions such as:

“We call complex human traits ‘talents’”.

“Normal variation is on a continuum but people working from a medical or clinical model tend to think in dichotomies…”.

“Reading is definitely not a property of the human brain”.

“If reading is a biological property of the brain, transmitted genetically, then this must have occurred by Lamarckian evolution.”

Why debatable? Because complex human traits are not necessarily ‘talents’; clinicians tend to be more aware of normal variation than most people; reading must be a ‘property of the brain’ if we need a brain to read; and the research McGuinness refers to didn’t claim that ‘reading’ was transmitted genetically.

I can understand why McGuinness might be trying to move away from the idea that reading difficulties are caused by a biological impairment that we can’t fix. After all, the research suggests SP can improve the poor phonological awareness that’s strongly associated with reading difficulties. I get the distinct impression, however, that she’s uneasy with the whole idea of reading difficulties having biological causes. She concedes that phonological processing might be inherited (p.140) but then denies that a weakness in discriminating phonemes could be due to organic brain damage. She’s right that brain scans had revealed no structural brain differences between dyslexics and good readers. And in scans that show functional variations, the ability to read might be a cause, rather than an effect.

But as McGuinness herself points out, reading is a complex skill involving many brain areas, and biological mechanisms tend to vary between individuals. In a complex biological process there’s a lot of scope for variation. Poor phonological awareness might be a significant factor, but it might not be the only factor. A child with poor phonological awareness plus visual processing impairments plus limited working memory capacity plus slow processing speed – all factors known to be associated with reading difficulties – would be unlikely to find those difficulties eliminated by SP alone. The risk in conceding that reading difficulties might have biological origins is that using teaching methods to remediate them might then be called into question – just what McGuinness doesn’t want to happen, and for good reason.

Natural and unnatural abilities

McGuinness’s view of the role of biology in reading seems to be derived from her ideas about the origin of skills. She says;

“It is the natural abilities of people that are transmitted genetically, not unnatural abilities that depend upon instruction and involve the integration of many subskills”. (p.140, emphasis McGuinness)

This is a distinction often made by SP proponents. I’ve been told that children don’t need to be taught to walk or talk because these abilities are natural and so develop instinctively and effortlessly. Written language, in contrast, is a recent man-made invention; there hasn’t been time to evolve a natural mechanism for reading, so we need to be taught how to do it and have to work hard to master it. Steven Pinker, who wrote the foreword to Why Children Can’t Read, seems to agree. He says “More than a century ago, Charles Darwin got it right: language is a human instinct, but written language is not” (p.ix).

Although that’s a plausible model, what Pinker and McGuinness fail to mention is that it’s also a controversial one. The part played by nature and nurture in the development of language (and other abilities) has been the subject of heated debate for decades. The reason for the debate is that the relevant research findings can be interpreted in different ways. McGuinness is entitled to her interpretation but it’s disingenuous in a book aimed at a general readership not to tell readers that other researchers would disagree.

Research evidence suggests that the natural/unnatural skills model has got it wrong. The same natural/unnatural distinction was made recently in the case of part of the brain called the fusiform gyrus. In the fusiform gyrus, visual information about objects is categorised. Different types of objects, such as faces, places and small items like tools, have their own dedicated locations. Because those types of objects are naturally occurring, researchers initially thought their dedicated locations might be hard-wired.

But there’s also a word recognition area. And in experts, the faces area is also used for cars, chess positions, and specially invented items called greebles. To become an expert in any of those things you require some instruction – you’d need to learn the rules of chess or the names of cars or greebles. But your visual system can still learn to accurately recognise, discriminate between and categorise many thousands of items like faces, places, tools, cars, chess positions and greebles simply through hours and hours of visual exposure.

Practice makes perfect

What claimants for ‘natural’ skills also tend to overlook is how much rehearsal goes into them. Most parents don’t actively teach children to talk, but babies hear and rehearse speech for many months before they can say recognisable words. Most parents don’t teach toddlers to walk, but it takes young children years to become fully stable on their feet despite hours of daily practice.

There’s no evidence that as far as the brain is concerned there’s any difference between ‘natural’ and ‘unnatural’ knowledge and skills. How much instruction and practice knowledge or skills require will depend on their transparency and complexity. Walking and bike-riding are pretty transparent; you can see what’s involved by watching other people. But they take a while to learn because of the complexity of the motor co-ordination and balance involved. Speech and reading are less transparent and more complex than walking and bike-riding, so take much longer to master. But some children require intensive instruction in order to learn to speak, and many children learn to read with minimal input from adults. The natural/unnatural distinction is a false one and it’s as unhelpful as assuming that reading difficulties are caused by ‘dyslexia’.

Multiple causes

What underpins SP proponents’ reluctance to admit biological factors as causes for reading difficulties is, I suspect, an error often made when assessing cause and effect. It’s an easy one to make, but one that people advocating changes to public policy need to be aware of.

Let’s say for the sake of argument that we know, for sure, that reading difficulties have three major causes, A, B and C. The one that occurs most often is A. We can confidently predict that children showing A will have reading difficulties. What we can’t say, without further investigation, is whether a particular child’s reading difficulties are due to A. Or if A is involved, that it’s the only cause.
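A minimal simulation, with invented prevalences, makes the point: A dominates at the population level, yet a sizeable minority of struggling readers don’t have A at all, and some who do also have B or C operating.

```python
import random
random.seed(0)

# Invented prevalences for three hypothetical causes of reading difficulty.
PREVALENCE = {"A": 0.10, "B": 0.04, "C": 0.02}

children = []
for _ in range(10_000):
    causes = {c for c, p in PREVALENCE.items() if random.random() < p}
    children.append(causes)

struggling = [c for c in children if c]        # any cause -> reading difficulties
with_a = sum("A" in c for c in struggling)     # most struggling readers show A...
without_a = len(struggling) - with_a           # ...but a sizeable minority don't
only_a = sum(c == {"A"} for c in struggling)   # and A isn't always the whole story

print(len(struggling), with_a, without_a, only_a)
```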

We know that poor phonological awareness is frequently associated with reading difficulties. Because SP trains children to be aware of phonological features in speech, and because that training improves word reading and spelling, it’s a safe bet that poor phonological awareness is also a cause of reading difficulties. But because reading is a complex skill, there are many possible causes for reading difficulties. We can’t assume that poor phonological awareness is the only cause, or that it’s a cause in all cases.

The evidence that SP improves children’s decoding ability is persuasive. However, the evidence also suggests that 12% – 15% of children will still struggle to learn to decode using SP. And that around 15% of children will struggle with reading comprehension. Having a method of reading instruction that works for most children is great, but education should benefit all children, and since the minority of children who struggle are the ones people keep complaining about, we need to pay attention to what causes reading difficulties for those children – as individuals. In education, one size might fit most, but it doesn’t fit all.

Reference

McGuinness, D. (1998). Why Children Can’t Read and What We Can Do About It. Penguin.

the myth of the neuromyth

In 1999, at the end of the ‘decade of the brain’, the Museum of Life in Rio de Janeiro was planning a series of events aimed at enhancing the general public’s understanding of brain research. As part of the planning process, a survey was undertaken to find out what the population of Rio de Janeiro, especially students, actually understood about the brain. The findings are set out in a paper by Suzana Herculano-Houzel entitled:

“Do you know your brain? A survey on public neuroscience literacy at the closing of the decade of the brain”.

Respondents were asked to give their opinion (yes, no, or don’t know; Y/N/DK) as to whether each of 95 statements about the brain was correct or not. 83 statements were directly related to brain research and 12 indirectly related. The general public scored around 50% correct responses to the items. Not surprisingly, the percentage of correct scores increased with the number of years in education and, to some extent, with the amount of reading respondents did – of books, science magazines and newspapers.

The statements in the survey were, of necessity, short. Examples include;

  • “we use our brains 24 hours a day”
  • “when a brain region is damaged and dies, other parts of the brain can take up its function” and
  • “we usually utilize only 10% of our brain”.

A core problem with single-sentence assertions is that complex, uncertain or contentious research findings often can’t be accurately summarised in one short statement. Herculano-Houzel addressed this problem by asking neuroscientists to respond to the survey. 35 replied. She took 70% agreement amongst them as the threshold for determining the correctness of the assertions. 56 items met this criterion.
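A minimal sketch of how such a threshold works; the response counts below are invented, not Herculano-Houzel’s data:

```python
def classify(yes, no, dont_know, threshold=0.70):
    """Label an assertion by whether 70% of experts agree on one answer."""
    total = yes + no + dont_know
    if yes / total >= threshold:
        return "correct"
    if no / total >= threshold:
        return "incorrect"
    return "no consensus"

# Invented counts for 35 responding neuroscientists.
print(classify(30, 3, 2))    # correct (86% answered yes)
print(classify(10, 20, 5))   # no consensus (only 57% answered no)
```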

The neuroscience literacy of trainee teachers

A decade later, some items from the Herculano-Houzel survey were used by Paul Howard-Jones and colleagues at the Graduate School of Education at the University of Bristol, England, to explore the neuroscience literacy of trainee teachers. As the authors point out, there is considerable concern about the prevalence of ‘neuromyths’ in education; they cite an OECD report published in 2002 that defined a neuromyth as a “misconception generated by a misunderstanding, a misreading or a misquoting of facts scientifically established”. (A later volume lists some common neuromyths.) As examples, Howard-Jones et al cite the Visual, Auditory and Kinaesthetic (VAK) learning styles model, left-brain/right-brain learning preferences, brain gym, and common perceptions of the effects of water, sugar and omega-3 oils on learning.

The authors gathered responses from 158 graduate trainee teachers, coming to the end of a PGCE course, to 38 assertions – 15 correct, 16 incorrect, and 7 open to subjective opinion. 16 of the assertions were adapted from the Herculano-Houzel study and the remainder were derived from concepts identified in preliminary interviews and from the authors’ previous research. Participants were asked whether the assertions reflected their opinions and to respond Y/N/DK. At first glance the two studies look alike, and indeed Howard-Jones et al’s trainee teachers’ responses were broadly comparable to those of Herculano-Houzel’s graduates. But there are some important differences between the two that need to be borne in mind when assessing Howard-Jones et al’s conclusions.

ambiguity

As I see it, there are two sources of ambiguity in the Herculano-Houzel survey. One is the challenge of condensing research findings accurately into a single-sentence assertion; Herculano-Houzel addressed this by noting the level of agreement amongst neuroscientists. The other is that Y/N/DK responses can fail to represent the degree of respondents’ agreement with single-sentence assertions representing complex, uncertain or contentious research findings. Respondents might only slightly agree or disagree with a statement, or might reply ‘don’t know’ meaning:

  • ‘I don’t know’
  • ‘Scientists don’t know’
  • ‘Neither yes nor no is accurate’ or
  • ‘I know what my opinion is but I don’t know whether there’s scientific evidence to support it’.

This ambiguity in responses didn’t matter too much in the Brazilian study, because its purpose was to get a broad overview of what the public knew or didn’t know about the brain. The public understanding of the brain is not unimportant, but it’s not as important as teachers’ understanding of the brain, since a teacher could, directly or indirectly, pass on a misunderstanding about brain function to thousands of students.

Howard-Jones et al discuss in some detail the possibility that respondents might have interpreted assertions in different ways, or that there might have been differences in understanding behind the responses. I think this could have been addressed by designing the questionnaire differently, because restricting teachers’ responses to Y/N/DK might not produce a sufficiently accurate picture of what teachers know or don’t know.

For example, a graduate who’d studied neuroscience might be aware of exceptions to an assertion that was broadly true, or might interpret the wording of the assertion differently to an arts graduate who knew very little about biology. Asking the trainee teachers to use a scale to express their degree of agreement with the statements would have been one solution. They could also have been asked to indicate how much they knew about the relevant scientific evidence.
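A minimal sketch of what that alternative item format might look like; the field names and scale anchors here are my own invention, not anything proposed in the papers:

```python
from dataclasses import dataclass

# Hypothetical response record for a single survey item, replacing a bare
# Y/N/DK answer with a degree of agreement and a self-rated knowledge level.

@dataclass
class ItemResponse:
    statement: str
    agreement: int  # 1 = strongly disagree ... 5 = strongly agree
    knowledge: int  # 0 = no knowledge of the evidence ... 3 = read primary research

# A respondent who mildly disagrees, but knows the claim only from popular sources:
response = ItemResponse(
    statement="we usually utilize only 10% of our brain",
    agreement=2,
    knowledge=1,
)
print(response)
```

A record like this distinguishes the confident dissenter from the hesitant one, and the well-read respondent from the one relying on hearsay – distinctions that Y/N/DK collapses.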

Neuromyths in education: Prevalence and predictors of misconceptions among teachers

Earlier this year, a paper entitled “Neuromyths in education: Prevalence and predictors of misconceptions among teachers” describing a similar study was published by Dekker et al, with Howard-Jones as a co-author. This study looked at the prevalence and predictors of neuromyths amongst teachers in the UK and the Netherlands. The survey contained 32 statements about the brain and its influence on learning. 15 were ‘educational neuromyths’ derived from the 2002 OECD publication and the Howard-Jones study, and the other 17 were “general assertions about the brain”.

Respondents were asked to say whether the statements were correct, incorrect, or they didn’t know (C/I/DK). Dekker et al found a slightly higher level (70%) of ‘correct’ responses to general assertions about the brain than previous studies had found amongst graduates, and a higher level of ‘correct’ perceptions of neuromyth statements (51%) than Howard-Jones et al (34%). What Dekker et al also found was that, contrary to the previous studies, greater general knowledge about the brain did not protect teachers from believing neuromyths.

This finding is not only counterintuitive, it also runs counter to the findings of the previous studies on which the Dekker et al study was based. If Dekker et al had used the same questionnaire as Herculano-Houzel, their findings would raise some interesting questions. Were the differences due to cultural, geographical or linguistic factors? Or was this a finding peculiar to teachers? But the Dekker et al questionnaire wasn’t identical to the Herculano-Houzel one, which suggests that the questionnaire itself could have contributed to the counterintuitive findings. Two obvious differences between Dekker et al and the previous studies are the ways ambiguity was tackled, in relation both to the statements and to the responses.

what is a neuromyth?

The different researchers addressed statement ambiguity in different ways. Herculano-Houzel measured agreement on the assertions amongst neuroscientists; Howard-Jones et al discussed in some detail the possible variations in interpretation of specific statements; and the OECD chapter from which some of Dekker et al’s neuromyths were derived explores them in some depth. But I could find no indication in the Dekker et al paper that ambiguity of the statements or of the responses had been addressed at all. Nor could I find an explanation of how the wording of the statements had been chosen.

Most of the statements that Dekker et al derived from Herculano-Houzel scored high levels of agreement amongst neuroscientists. But there were some exceptions, for example:

  • The ‘correct’ statement (14) “when a brain region is damaged other parts of the brain can take up its function” scored just on the 70% threshold (24% of neuroscientists disagreed and 6% didn’t know).
  • The ‘incorrect’ statement (7) “we only use 10% of our brain” scored below the agreement threshold, with only 68% of neuroscientists disagreeing with it (6% agreed and 26% didn’t know).
  • In addition, the wording of the statements was changed between surveys; Herculano-Houzel has “when a brain region is damaged and dies, other parts of the brain can take up its function” and “we usually utilize only 10% of our brain”, respectively.

We don’t know whether the variation in neuroscientists’ levels of agreement resulted from debatable research findings or because of differences in interpretation of the wording. If the latter, it’s possible that the Dekker et al results were affected by respondents’ interpretations.

Some of Dekker et al’s general statements are open to interpretation too. Item 3 “boys have bigger brains than girls” is true if you compare the means of brain size for boys and girls of the same age. However, the distributions of individual measures overlap, which means that not all boys have bigger brains than girls of the same age, as you can see from the graphs below, taken from Lenroot et al (2007).

[Figure: scatterplot of longitudinal measurements of total brain volume for males (N = 475 scans, shown in dark blue) and females (N = 354 scans, shown in red)]

[Figure: gray matter subdivisions – (a) frontal lobe, (b) parietal lobe, (c) temporal lobe, (d) occipital lobe]
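The same point can be made with a toy simulation. Here’s a minimal sketch using invented normal distributions – these means and standard deviations are made up for illustration, not taken from the Lenroot et al data:

```python
import random

random.seed(1)

# Invented distributions: same spread, different means. Illustration only.
boys = [random.gauss(1200, 100) for _ in range(10_000)]   # 'brain volume', arbitrary units
girls = [random.gauss(1100, 100) for _ in range(10_000)]

mean_diff = sum(boys) / len(boys) - sum(girls) / len(girls)
print(f"difference in means: {mean_diff:.0f}")

# In what proportion of random boy/girl pairings is the girl's value larger?
girl_larger = sum(g > b for b, g in zip(boys, girls)) / len(boys)
print(f"girl larger in {girl_larger:.0%} of pairings")
```

With these made-up figures the group means differ clearly, yet in roughly a quarter of random pairings the girl’s value is the larger one. A difference in means is a statement about groups, not about individuals.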

Then there’s item 12, which says “there are critical periods in childhood after which certain things can no longer be learned”. The research suggests that there are indeed critical periods for some sensory functions – children whose eye defects are corrected too late never develop normal vision, and children deprived of early language input have failed to develop normal speech. This implies that whether the statement is ‘correct’ or not depends on what is meant by ‘certain things’ and ‘learned’. Then take item 14, which claims the statement “learning is not due to the addition of new cells to the brain” is ‘incorrect’. That assertion doesn’t appear to be incorrect for the hippocampus. Admittedly, much of the relevant research has taken place since this item appeared in the Herculano-Houzel survey, but the findings had been around for a decade before the Dekker et al study, and the point was raised by Howard-Jones et al.

In addition, some statements differed only in respect of some fairly fine-grained distinctions. Item 15 says “individuals learn better when they receive information in their preferred learning style (e.g., auditory, visual, kinesthetic)” and is deemed ‘incorrect’. But item 27 “individual learners show preferences for the mode in which they receive information (e.g., visual, auditory, kinesthetic)” is deemed ‘correct’.

Both items distinguish generic preferred learning styles (mine happens to consist of reading new material whilst propped up in bed, followed by mulling it over while I go for a walk) from a specific Learning Styles model, derived from Neuro-Linguistic Programming theory, involving three named sensory domains. Respondents who are aware of criticisms of the VAK Learning Styles model might justifiably question whether individual learners actually do show preferences for the mode in which they receive information; what about people who learn best from TV documentaries, for example? Audio-visual communication is itself a mode of information transmission, but it involves two sensory modalities. And what about constraints imposed by the learning objective itself? Most people would prefer to learn to drive or swim by receiving information kinaesthetically, whatever their usual preferences, because it’s extremely difficult to learn to do either using only visual and/or auditory modalities.

The upshot is that at least 7 of Dekker et al’s 32 statements contain quite high levels of ambiguity, due either to the nature of the relevant research findings or to the wording of the assertions. It’s quite possible that Dekker et al’s counterintuitive finding – that general knowledge about the brain didn’t protect teachers against believing neuromyths – is actually an experimental artefact.

neuromyths: correct or incorrect, true or false?

The second difference was in the way response ambiguity was dealt with. Herculano-Houzel and Howard-Jones et al used subjective agreement (Y/N/DK). Dekker et al used objective ‘correctness’ (C/I/DK) – which isn’t the same thing.

I came across the Dekker et al study via Kevin Wheldall’s blog Notes from Harefield. When responding to my comments about ambiguity in survey items, he noted that the Dekker et al statements were presented as an online quiz on Leah Tomlin’s Education Elf blog. The quiz differs from the Dekker et al survey in that it doesn’t have a ‘don’t know’ option. In other words, in the quiz itself there’s no acknowledgement of any possible ambiguity in the assertions – although several people who have completed it have commented on ambiguities in the statements. The Education Elf discusses the study in more detail here.

Following the trail of these studies has been a fascinating demonstration of what this blog is named after – logical incrementalism. The research questions have shifted from the degree of ‘neuroscience literacy’ of the public to the prevalence of ‘neuromyths’ amongst teachers. The measure of the ‘correctness’ of statements changed from the degree of agreement amongst neuroscientists, on a 100-point scale, to statements being categorized as either ‘correct’ or ‘incorrect’ with no explanation of the criteria for that categorization – or, if one includes the Education Elf blog quiz, categorized as ‘true’ or ‘false’ with no explanation. This is despite an extensive discussion in the literature of the nature of the misconceptions, misunderstandings, misreadings and misquotings involved, and despite respondents drawing attention to ambiguities that might have affected their responses.

There are obvious advantages in re-using survey items developed in previous studies. Many methodological issues would have been addressed in the initial survey design, and any residual weaknesses would have become apparent from the results. However, there are risks involved in making incremental changes to previous questionnaires unless attention is paid to the parameters that guided their development. In this case, the criterion for ‘correctness’ has been largely overlooked, as has the ambiguity that’s an inevitable outcome of asking for Y/N/DK responses.

There’s no question that misconceptions, misunderstandings, misreadings and misquotings of the neuroscience literature have contributed to the prevalence of neuromyths amongst the general public and amongst teachers. Teachers might indeed be especially susceptible because findings from neuroscience are directly applicable to their work and because many who haven’t studied biological sciences are likely to rely on simplified sources for information about the brain.

Having said that, I’d suggest that labelling complex, uncertain or contentious research findings as either correct or incorrect, true or false, facts or myths, is what got us into this mess in the first place. Clearly teachers need more, and better, information about the brain, but some basic biology might prove more useful than putting a tick or a cross next to oversimplified ideas.

References

Dekker, S, Lee, NC, Howard-Jones, P & Jolles, J (2012). Neuromyths in Education: Prevalence and Predictors of Misconceptions among Teachers. Frontiers in Psychology, 3, 429.

Herculano-Houzel, S (2002). Do you know your brain? A survey on public neuroscience literacy at the closing of the decade of the brain. Neuroscientist 8, 98–110.

Howard-Jones, P, Franey, L, Mashmoushi, R & Liao, YC (2009). The neuroscience literacy of trainee teachers. Paper presented at the British Educational Research Association Annual Conference, Manchester.

Lenroot, RK, Gogtay, N, Greenstein, DK, Molloy, E, Wallace, GL, Clasen, LS, Blumenthal JD, Lerch, J, Zijdenbos, AP, Evans, AC, Thompson, PM & Giedd, JN (2007). Sexual dimorphism of brain developmental trajectories during childhood and adolescence. NeuroImage 36, 1065–1073.

Organisation for Economic Co-operation and Development (2002). Understanding the brain: Towards a new learning science. Paris: OECD.

Organisation for Economic Co-operation and Development (2007). Understanding the brain: The birth of a learning science. Paris: OECD.

Edited for clarity 3/6/15 and 13/2/18.