We’re all different

We’re all different. Tiny variations in our DNA before and at conception. Our parents smoked/didn’t smoke, drank/didn’t drink, followed ideal/inadequate diets, enjoyed robust health or had food intolerances, allergies, viral infections. We were brought up in middle class suburbia/a tower block and attended inadequate/outstanding schools. All those factors contribute to who we are and what we are capable of achieving.

That variation, inherent in all biological organisms, is vital to our survival as a species. Without it, we couldn’t adapt to a changing environment or form communities that successfully protect us. The downside of that inherent variation is that some of us draw the short straw. Some variations mean we don’t make it through childhood, have lifelong health problems or die young. Or that we become what Katie Ashford, SENCO at Michaela Community School, Wembley, calls the ‘weakest pupils’.

Although the factors that contribute to our development aren’t, strictly speaking, random, they are so many and so varied that they might as well be random. That means that in a large population, the measurement of any characteristic affected by many factors – height, blood pressure, intelligence, reading ability – will form what’s known as a normal distribution: the familiar bell curve. (This is the central limit theorem at work: add up enough small, independent influences and the total is approximately normal, whatever the shape of the individual influences.)
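The point is easy to demonstrate. Here’s a minimal simulation – purely illustrative, not drawn from any real dataset – in which each ‘score’ is the sum of a hundred small, independent, uniformly distributed influences. The influences themselves aren’t bell-shaped, but their totals are:

```python
import numpy as np

rng = np.random.default_rng(42)

# Each simulated "score" is the sum of 100 small, independent
# influences (genes, diet, schooling...). The influences are
# uniform, i.e. not bell-shaped, but their sum is.
influences = rng.uniform(-1, 1, size=(100_000, 100))
scores = influences.sum(axis=1)

# Crude text histogram of the resulting distribution
counts, edges = np.histogram(scores, bins=15)
for count, left in zip(counts, edges):
    print(f"{left:6.1f} | {'#' * int(60 * count / counts.max())}")
```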

The bell curve

If a particular characteristic forms a bell-shaped distribution, we can make certain predictions about a large population. For that characteristic, 50% of the population will score above average and 50% below average; relatively few people will be exactly average. We’ll know that around 68% of the population will score fairly close to average, around 27% noticeably above or below it, and around 5% considerably higher or lower. That’s why medical reference ranges for various characteristics are based on the upper and lower measurements for the central 95% of the population; if your blood glucose level or thyroid function is in the lowest or highest 2.5%, you’re likely to have a real problem rather than a normal variation.
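Those percentages fall straight out of the normal curve’s cumulative distribution function. A quick sketch, using only the standard library, recovers them:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

within_1_sd = phi(1) - phi(-1)               # 'fairly close to average'
between_1_and_2_sd = 2 * (phi(2) - phi(1))   # 'noticeably above or below'
beyond_2_sd = 2 * (1 - phi(2))               # 'considerably higher or lower'

print(f"within 1 SD:   {within_1_sd:.1%}")         # ~68.3%
print(f"1 to 2 SD out: {between_1_and_2_sd:.1%}")  # ~27.2%
print(f"beyond 2 SD:   {beyond_2_sd:.1%}")         # ~4.6%
# A 95% reference range cuts 2.5% off each tail, at roughly ±1.96 SD.
```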

So in terms of general ability that means around 2.5% of the population will be in a position to decide whether they’d rather be an Olympic athlete, a brain surgeon or Prime Minister (or all three), whereas another 2.5% will find everyday life challenging.

What does a normal distribution mean for education? Educational attainment is affected by many causal factors, so by bizarre coincidence the attainment of 50% of school pupils is above average, and 50% below it. Around 20% of pupils have ‘special educational needs’ and around 2.5% will have educational needs that are significant enough to warrant a Statement of Special Educational Needs (recently replaced by Education Health and Care Plans).

Special educational needs

In 1978, the Warnock report pointed out that based on historical data, up to 20% of school pupils would probably have special educational needs at some point in their school career. ‘Special educational needs’ has a precise but relative meaning in law. It’s defined in terms of pupils requiring educational provision additional to or different from “educational facilities of a kind generally provided for children of the same age in schools within the area of the local education authority”.

Statements of SEN

The proportion of pupils with statements of SEN remained consistently at around 2.8% between 2005 and 2013 (after which the SEN system changed). Sources: http://www.publications.parliament.uk/pa/cm200506/cmselect/cmeduski/478/478i.pdf and https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/225699/SFR30-2013_Text.pdf

It could be, of course, that these figures are an artifact of the system; anecdotal evidence suggests that some local authorities considered statutory assessments only for children who scored below the 2nd percentile on the WISC scale. Or it could be that measures of educational attainment do reflect the effectively random nature of the causes of educational attainment. In other words, a single measure of educational attainment can tell us whether a child’s attainment is unusually high or low; it can’t tell us why it’s unusually high or low. That often requires a bit of detective work.

If they can do it, anyone can

Some people feel uncomfortable with the idea of human populations having inherent variation; it smacks of determinism, excuses and complacency. So from time to time we read inspiring accounts of children in a school in a deprived inner city borough all reading fluently by the age of 6, or of the GCSE A*-C grades in a once failing school leaping from 30% to 60% in a year. The implication is that if they can do it, anyone can. That’s a false assumption. Those things can happen in some schools. But they can’t happen in all schools simultaneously, because of the variation inherent in human populations and because of the nature of life events (see previous post).

Children of differing abilities don’t distribute themselves neatly across schools. Some schools might have no children with statements and others might have many. Even if all circumstances were equal (which they’re not), clustering occurs within random distributions. This is a well-known phenomenon in epidemiology; towns with high numbers of cancer patients or hospitals with high numbers of unexpected deaths where no causal factors are identified tend to attract the attention of conspiracy theorists. This clustering illusion isn’t so well known in educational circles. It’s all too easy to assume that a school has few children with special educational needs because of the high quality of teaching, or that a school has many children with SEN because teaching is poor. Obviously, it’s more complicated than that.
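How much clustering does pure chance produce? A minimal sketch, with invented numbers – 100 schools of 200 pupils each, statements handed out entirely at random at the national rate of 2.8% – shows the spread you’d expect with no causal factor at all:

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 schools of 200 pupils; every pupil has the same 2.8% chance
# of a statement, so any clustering is chance alone.
counts = rng.binomial(n=200, p=0.028, size=100)

print("expected per school:    ", 200 * 0.028)  # 5.6
print("fewest in any school:   ", counts.min())
print("most in any school:     ", counts.max())
print("schools with 2 or fewer:", (counts <= 2).sum())
print("schools with 10 or more:", (counts >= 10).sum())
```

Run this and some schools end up with two or three times the expected number of statements, and others with almost none, despite identical underlying odds.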

What helps the weakest pupils?

According to Katie, what ‘the weakest pupils’ need is “more focus, more rigour and more practice if they are to stand any chance of catching up with their peers”. Katie goes on to unpack what she means. More focus means classrooms that aren’t chaotic. More rigour means expecting children to read challenging texts. More practice means practising the things they can’t do, not the things they can.

Katie’s post is based on the assumption that the weakest pupils can and should ‘catch up with their peers.’ But it’s not clear what she means by that. Does she mean the school not needing a bottom set? All pupils attaining at least the national average for their age group? All pupils clustered at the high end of the attainment range?  She doesn’t say.

In a twitter discussion, Katie agreed that there is variation inherent in a population, but

[Screenshot: Katie Ashford’s tweet about the bell curve]

I agree with Katie that there is often room for improvement, and that her focus, getting all children reading, can make a big difference, but improvement is likely to entail more than more focus, more rigour and more practice. In an earlier post Katie complains that “Too many people overcomplicate the role of SENCO”. She sees her role as very simple: “I avoid pointless meetings, unnecessary paperwork and attending timewasting conferences as much as possible. Instead, I teach, organise interventions, spend lots of time with the pupils, and make sure teachers and support staff have everything they need to teach their kids really, really well.”

Her approach sounds very sensible.  But she doesn’t say what the interventions are. Or what the teachers and support staff need to teach their kids really, really well. Or what meetings, paperwork and conferences she thinks are pointless, unnecessary and timewasting. Katie doesn’t say how many children at Michaela have statements of special needs or EHCPs – presumably some children have arrived there with these in place. Or what she does about the meetings and paperwork involved. Or how she tracks individual children’s progress. (I’m not suggesting that statements and EHCPs are the way to go – just that currently they’re part of the system and SENCOs have to deal with them).

What puzzled me most about Katie’s interventions was that they bore little resemblance to those I’ve seen other SENCOs implement in mainstream schools. It’s possible that those SENCOs have overcomplicated their role. It could be that the SENCOs I’ve watched at work are in primary schools and that at secondary level things are different. Another explanation is that they’ve identified the root causes of children’s learning difficulties and have addressed them.

They’ve introduced visual timetables, taught all pupils Makaton, brought in speech and language therapists to train staff, installed the same flooring throughout the building to improve the mobility of children with cerebral palsy or epilepsy, and integrated music, movement and drama into the curriculum. They’ve developed assistive technology for children with sensory impairments and built up an extensive, accessible school library that includes easy-to-read books whose content suits older pupils (for weaker readers) and more challenging texts whose content suits younger pupils (for stronger readers). They’ve planted gardens and attended forest schools regularly to develop motor and sensory skills.

As someone who read avidly as a child – including Dickens – I can see how many hours of reading and working through chapters of Dickens could improve the reading ability of many children. But I’m still struggling to see how that would work for a kid whose epilepsy results in frequent ‘absences’ of attention, or who has weak control of eye movements, an auditory processing impairment or very limited working memory capacity.

I’m aware that ‘special educational needs’ is a contentious label and that it’s often only applied because children aren’t being taught well, or taught appropriately. I’m utterly committed to the idea of every child being given the best possible education. I just don’t see any evidence to support the idea that catching up with one’s peers is a measure of educational excellence, or that practising what you’re not good at is a better use of time than doing what you are good at.

Section 7 of the Education Act 1996 (based on the 1944 Education Act) frames a suitable education in terms of an individual child’s age, ability and aptitude, and any special educational needs they may have. The education system appears to have recently lost sight of the aptitude element. I fear that an emphasis on ‘catching up’ with one’s peers and on addressing weaknesses rather than developing strengths will inevitably result in many children seeing themselves as failing to jump an arbitrary hurdle, rather than as individuals with unique sets of talents and aptitudes who can play a useful and fulfilling role in society.

I’d be interested in Katie’s comments.

standardised testing: what’s it good for?

A campaign by parents to keep their children off school on Tuesday 3rd May as a protest against SATs prompted a Twitter discussion about the pros and cons of standardised tests. One teacher claimed that they’re important because they hold schools to account. I think that’s a misuse of standardised tests. First, because test results are a poor proxy measure of teaching quality. Second, because good teaching (and hard work on the part of the student) are necessary but not sufficient conditions for good test performance. Third, because using test results to hold schools to account overlooks the natural variation inherent in large populations.

test results as a measure of teaching quality

Tests such as the National Curriculum Tests (commonly known as SATs), GCSEs and A levels sample students’ recall and understanding of a particular body of knowledge – the KS2 curriculum, or a GCSE or A level course. The knowledge is sampled because testing the student’s knowledge of all the material in the course would be very time consuming and unwieldy. In other words, test results are a proxy for the student’s knowledge of the course material.

But the course material itself is a proxy for all that’s known about a particular topic. KS2 students learn basic principles about how atoms and molecules behave, GCSE and A level students learn about atomic theory in more detail, but Chemistry undergraduates complain that they have to then unlearn much of what they were taught earlier because it was the simplified version.  So test results are actually a second order proxy for the student’s knowledge of a particular topic.

Then there are factors other than the student’s knowledge that impact on test results. The student might be unwell on the day of the test, or might have slept badly the night before. In the months before the test they might have been absent from school for weeks with glandular fever, or their parents might have split up. In other words, test results are affected by factors beyond the control of either the school or the student, which makes them a still weaker proxy for both the quality of teaching and the student’s knowledge.

good teaching and hard work are necessary but not sufficient for good test performance

There’s an asymmetry between the causes of high and low test results. It’s difficult to get a high test score without hard work on the part of the student and good teaching on the part of the school.   But there are many reasons why a student might get a low score despite hard work and good teaching.

That’s at the individual level. Similarly at the school level it’s safe to conclude that a school with consistently good results in national tests is doing its job properly, but it’s not safe to conclude that a school that doesn’t get consistently good results isn’t.
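The asymmetry can be made concrete with a toy simulation. The model below is pure invention – teaching quality and student effort add together, and 15% of students suffer an adverse ‘shock’ such as illness that only ever pushes scores down – so it’s a sketch of the logic, not a claim about real effect sizes:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

teaching = rng.normal(0, 1, n)    # quality of teaching
effort = rng.normal(0, 1, n)      # student's hard work
shock = rng.binomial(1, 0.15, n)  # illness, family break-up, ...

# Invented model: shocks only ever push scores down.
score = teaching + effort - 2 * shock

high, low = score > 2, score < -2
print("high scorers who had below-average teaching:",
      f"{(teaching[high] < 0).mean():.0%}")  # uncommon
print("low scorers who had above-average teaching:",
      f"{(teaching[low] > 0).mean():.0%}")   # much more common
```

Because shocks provide an extra route to a low score but no route to a high one, a high score tells you more about the teaching than a low score does.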

The education system has been plagued over the years by two false assumptions about student potential: either that all students have the potential to get good test scores and that good teaching is the key determining factor, or that students from certain demographic groups won’t get good test scores however well they’re taught. In reality it’s more complicated than that, of course. Students from leafy suburbs are more likely to do well in tests for many reasons; even if they are taught badly, they have access to resources that can sometimes compensate for that. Students from the kind of housing estate that motivates Iain Duncan Smith are at a higher risk of adverse life events scuppering their chances of getting good test results, no matter how good the teaching at their school. And the older they get, the more adverse life events they are likely to encounter.

So, test results are a pretty good first order proxy for a student’s knowledge of course material. They are a not-so-good second order proxy for a student’s knowledge of the topic the course material represents. And only a weak proxy for quality of teaching.

life is just one damn thing after another*

Those in favour of standardised testing often cite cases of particular schools in deprived areas that have achieved amazing outcomes against the odds. Every child can read by the age of six, or is fluent in French, or whatever.   The implication is that if one school can do it, all schools can. In principle, that’s true. In principle, all head teachers can be visionaries, all teachers can be excellent and all families can buy in to what the school wants to achieve.

But in practice life doesn’t work like that. Head teachers get sick, senior staff have to work part-time because of family commitments, local housing is unaffordable making recruitment a nightmare, or for many families school is just one more thing they can’t quite keep up with.

On top of that, human beings are biological organisms. Like all populations of biological organisms we show considerable variation due to our genes, our environment and interactions between the two. It might be possible to improve test performance across the education system, but there are limits to the improvement that’s possible. Clean water and good sanitation increase life expectancy, but life expectancy doesn’t go on increasing indefinitely once communities have access to clean water and sanitation. Expecting more than 50% of children in primary schools to perform above average simply shows a poor grasp of natural variation – and statistics.

standardised testing: what is it good for?

Standardised testing in primary schools makes sense. It samples children’s knowledge of key material. It allows schools to benchmark attainment. Standardised testing as a performance measure can alert schools to problems that are impacting on children’s learning.

However, the reasons for differences in students’ performance in standardised tests are many and varied. Performance will not improve unless the reasons for poor performance are addressed. Sometimes those reasons are complex and not within schools’ remit. To address them, local families might need better public services, better jobs or better housing – arguably not the core responsibility of a school. Poor teaching might not be involved at all.

However, successive governments haven’t used test results simply as broad indicators of whether a school is on track or whether there are problems that need to be addressed (not necessarily by the school), but as a proxy for teaching quality.  Test results have been used to set performance targets and determine funding, regardless of whether schools can control the factors involved.

This shows a poor understanding of performance management§, and it’s hardly surprising that the huge amounts of money and incessant policy changes thrown at the education system over recent decades have had little impact on the quality of education of the population as a whole.

Notes

*A quotation attributed to Elbert Hubbard, an American writer who died when the Lusitania was sunk in 1915.

§ The best book I’ve read on performance management is a slim volume by Donald Wheeler called Understanding variation: The key to managing chaos.  A clearly written, step-by-step guide to figuring out if the variation you’ve spotted is within natural limits or not.  Lots of references to things like iron smelting and lumber yards, but still very relevant to schools.
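For the curious, the core calculation in Wheeler’s approach – the ‘natural process limits’ of an XmR chart – fits in a few lines. The yearly results below are invented, just to show the mechanics; the limits are the mean plus or minus 2.66 times the average moving range:

```python
# Invented yearly results for one school (e.g. % of pupils reaching
# a benchmark). Wheeler's XmR "natural process limits" are
# mean ± 2.66 × average moving range.
results = [54, 58, 51, 56, 60, 53, 57, 55]

mean = sum(results) / len(results)
moving_ranges = [abs(b - a) for a, b in zip(results, results[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

lower = mean - 2.66 * mr_bar
upper = mean + 2.66 * mr_bar
print(f"natural limits: {lower:.1f} to {upper:.1f}")
# Only results outside these limits signal something worth
# investigating; anything inside is routine year-to-year variation.
```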

phlogiston for beginners

Say “learning styles” to some teachers and you’re likely to get your head bitten off. Tom Bennett, the government’s behaviour tsar/guru/expert/advisor, really, really doesn’t like the idea of learning styles as he has made clear in a series of blogposts exploring the metaphor of the zombie.

I’ve come in for a bit of flak from various sources for suggesting that Bennett might have rather over-egged the learning styles pudding. I’ve been accused of not accepting the evidence, not admitting when I’m wrong, advancing neuromyths, being a learning styles advocate, being a closet learning styles advocate, and, by implication, not caring about the chiiiiiiiildren and being responsible for a metaphorical invasion by the undead. I reject all those accusations.

I’m still trying to figure out why learning styles have caused quite so much fuss. I understand that teachers might be a bit miffed about being told by schools to label children as visual, auditory or kinaesthetic (VAK) learners only to find there’s no evidence that they can be validly categorised in that way. But the time and money wasted on learning styles surely pales into insignificance next to the amounts squandered on the industry that’s sprung up around some questionable assessment methods, an SEN system that a Commons Select Committee pronounced not fit for purpose, or a teacher training system that for generations has failed to equip teachers with the skills they need to evaluate popular wheezes like VAK and brain gym.

And how many children have suffered actual harm as a result of being given a learning style label? I’m guessing very few compared to the number whose life has been blighted by failing the 11+, being labelled ‘educationally subnormal’, or more recent forms of failure to meet the often arbitrary requirements of the education system.  What is it about learning styles?

the learning styles neuromyth

I made the mistake of questioning some of the assumptions implicit in this article, notably that the concept of learning styles is a false belief, that it’s therefore a neuromyth and is somehow harmful in that it raises false hopes about transforming society.

My suggestion that the evidence for the learning styles concept is mixed rather than non-existent, that there are some issues around the idea of the neuromyth that need to be addressed, and that the VAK idea, even if wrong, probably isn’t the biggest hole in the education system’s bucket, was taken as a sign that my understanding of the scientific method must be flawed.

the evidence for aliens

One teacher (no names, no pack drill) said “This is like saying the ‘evidence for aliens is mixed’”. No it isn’t. There are so many planets in the universe it’s highly unlikely Earth is the only one supporting life-forms, but so far, we have next to no evidence of their existence. But a learning style isn’t a life-form, it’s a construct, a label for phenomena that researchers have observed, and a pretty woolly label at that. It could refer to a wide range of very different phenomena, some of which are really out there, some of which are experimental artifacts, and some of which might be figments of a researcher’s imagination. It’s pointless speculating about whether learning styles exist or not, because whether they exist or not depends on what you label as a ‘learning style’. Life-forms are a different kettle of fish; there’s some debate around what constitutes a life-form and what doesn’t, but a life-form is far more tightly specified than any learning style ever has been.

you haven’t read everything

I was then chided for pointing out that Tom Bennett said he hadn’t finished reading the Coffield Learning Styles Review when (obviously) I hadn’t read everything there was to read on the subject either. But I hadn’t complained that Tom hadn’t read everything; I was pointing out that, by his own admission in his book Teacher Proof, he’d stopped reading before he got to the part of the Coffield review that discusses learning styles models found to have validity and reliability, so it’s not surprising he came to a conclusion that Coffield didn’t support.

my evidence weighs more than your evidence

Then, “I’ve seen the tiny, tiny evidence you cite to support LS. Dwarfed by oceans of ‘no evidence’. There’s more evidence for ET than LS”. That’s not how the evaluation of scientific evidence works. It isn’t a case of putting the ‘for’ evidence in one pan of the scales and the ‘against’ evidence in the other, with the heaviest evidence winning. On that basis, the heliocentric theories of Copernicus and Kepler would never have seen the light of day.
 
how about homeopathy?

Finally, “How about homeopathy? Mixed evidence from studies.” The implication is that if I’m not dismissing learning styles because the evidence is mixed, then I can’t dismiss homeopathy. Again the analogy doesn’t hold. Research shows that there is an effect associated with homeopathic treatments – something happens in some cases. But the theory of homeopathy doesn’t make sense in the context of what we know about biology, chemistry and physics. This suggests that the problem lies in the explanation for the effect, not the effect itself. The concept of learning styles, by contrast, doesn’t conflict with what we know about the way people learn. It’s quite possible that people do have stable traits when it comes to learning. Whether or not they do, and if they do, what those traits are, is another matter.

Concluding from complex and variable evidence that learning styles don’t exist, and that not dismissing them out of hand is akin to believing in aliens and homeopathy, looks to me suspiciously like saying  “Phlogiston? Pfft! All that stuff about iron filings increasing in weight when they combust is a load of hooey.”

traditional vs progressive: mathematics, logic and philosophy meet the real world

For thousands of years, human beings have been trying to figure out why the world they live in works in the way it does. But it’s only in the last five hundred years or so that a coherent picture of those explanations has begun to emerge. It’s as if people have long had many of the pieces of the jigsaw, but there was no picture on the box. Because a few crucial pieces were missing, it was impossible to put the puzzle together so that the whole thing made sense.

Some of the puzzle pieces that began to make sense to the ancient Greeks involved mathematics – notably geometry. They assumed that if the consistent principles of geometry could be reliably applied to the real world, then it was likely other mathematical principles and the principles underlying mathematics (logic) could too. So philosophers started to use logic to study the fundamental nature of things.

Unfortunately for the mathematicians, logicians and philosophers, the real world didn’t always behave in ways that mathematics, logic and philosophy predicted. And that’s why we developed science as we know it today. Scientific theories are tested against observations. If the observations fit the theory, we can take the theory to be true for the time being. As soon as observations don’t fit the theory, it’s back to the drawing board. As far as science is concerned we can never be 100% sure of anything, but obviously we can be pretty sure of some things, otherwise we wouldn’t be able to cure diseases, build aircraft that fly or land probes on Mars.

unknown unknowns

Mathematics, logic and philosophy provide useful tools for helping us make sense of the real world, but those tools have limitations. One of the limitations is that the real world contains unknowns. Not only that, but as Donald Rumsfeld famously pointed out, some unknowns are unknown – we don’t always know what we don’t know. You can work out the unknowns in a set of mathematical equations – but not if you don’t know how many unknowns there are. Two equations will pin down x and y, but if an unsuspected z is also at work, the same two equations settle nothing.

Education theory is a case in point. It has, from what I’ve seen, always been a bit of a mess. That’s not surprising, given that education is a heavily derived field; it encompasses a wide range of disciplines from sociology and politics to linguistics and child development. Bringing together core concepts from all relevant disciplines to apply them to education is challenging. There’s a big risk of oversimplifying theory, particularly if you take mathematics, logic or philosophy as your starting point.

That’s because it’s tempting, if you are familiar with mathematics, logic or philosophy but don’t have much experience of messier sciences like genetics, geography or medicine, to assume that the real world will fit into the mathematical, logical or philosophical grand scheme of things. It won’t. It’s also tempting to take mathematics, logic or philosophy as your starting point for developing educational theory on the assumption that rational argument will cut a clear path through the real-world jungle. It won’t.

The underlying principles of mathematics, logic and philosophy are well established, but once real-world unknowns get involved, those underlying principles, although still valid, can’t readily be applied, because you don’t know what you’re applying them to – if you haven’t identified all the causes of low school attendance, say, or if you assume you’ve identified all the causes when you haven’t.

traditional vs progressive

Take, for example, the ongoing debate about the relative merits of traditional vs progressive education. Critics often point out that framing educational methods as either traditional or progressive is futile for several reasons. People have different views about which methods are traditional and which are progressive, teachers don’t usually stick to methods they think of as being one type or the other, and some methods could qualify as both traditional and progressive. In short, critics claim that the traditional/progressive dichotomy is a false one.

This criticism has been hotly contested, notably by self-styled proponents of traditional methods. In a recent post, Greg Ashman contended that Steve Watson, as an author of a study comparing ‘traditional or teacher-centred’ to ‘student-centred’ approaches to teaching mathematics, was being inconsistent in claiming that the traditional/progressive dichotomy was a false one.

Watson et al got dragged into the traditional/progressive debate because of the terminology they used in their study. First off, they used the terms ‘teacher-centred’ and ‘student-centred’. In their study, ‘teacher-centred’ and ‘student-centred’ approaches are defined quite clearly. In other words ‘teacher-centred’ and ‘student-centred’ are descriptive labels that, for the purposes of the study, are applied to two specific approaches to mathematics teaching. The researchers could have labelled the two types of approach anything they liked – ‘a & b’, ‘Laurel & Hardy’ or ‘bacon & eggs’- but giving them descriptive labels has obvious advantages for researcher and reader alike. It doesn’t follow that the researchers believe that all educational methods can legitimately be divided into two mutually exclusive categories either ‘teacher-centred’ or ‘student-centred’.

Their second slip-up was using the word ‘traditional’. It’s used three times in their paper, again descriptively, to refer to usual or common practice. And again, the use of ‘traditional’ as a descriptor doesn’t mean the authors subscribe to the idea of a traditional/progressive divide. It’s worth noting that they don’t use the word ‘progressive’ at all.

words are used in different ways

Essentially, the researchers use the terms ‘teacher-centred’, ‘student-centred’ and ‘traditional’ as convenient labels for particular educational approaches in a specific context. The approaches are so highly specified that other researchers would stand a good chance of accurately replicating the study if they chose to do so.

Proponents of the traditional/progressive dichotomy are using the terms in a different way – as labels for ideas. In this case, the ideas are broad, mutually exclusive categories to which all educational approaches, they assume, can be allocated; the approaches involved are loosely specified, if indeed they are specified at all.

Another dichotomy characterises the traditional/progressive divide: teacher-centred vs student-centred methods. In his post on the subject, Greg appears to make three assumptions about Watson et al’s use of the terms ‘teacher-centred’ and ‘student-centred’ to denote two specific types of educational method:

• because they use the same terms as the traditional/progressive dichotomy proponents, they must be using those terms in the same way as the traditional/progressive dichotomy proponents, therefore
• whatever they claim to the contrary, they evidently do subscribe to the traditional/progressive dichotomy, and
• if the researchers apply the terms to two distinct types of educational approach, all educational methods must fit into one of the two mutually exclusive categories.

Commenting on his post, Greg says “to prove that it is a false dichotomy then you would have to show that one can use child-centred or teacher-centred approaches at the same time or that there is a third alternative that is commonly used”.  I pointed out that whether child-centred and teacher-centred are mutually exclusive depends on what you mean by ‘at the same time’ (same moment? same lesson?) and suggested collaborative approaches as a third alternative. Greg obviously didn’t accept that but omitted to explain why.

Collaborative approaches to teaching and learning were used extensively at the primary school I attended in the 1960s, and I’ve found them very effective for educating my own children. Collaboration between teacher and student could be described as neither teacher-centred nor student-centred, or as both. By definition it isn’t either one or the other.

tired of talking about traditional/progressive?

Many teachers say they are tired of never-ending debates about traditional/progressive methods and of arguments about whether or not the traditional/progressive dichotomy is a false one. I can understand why; the debates often generate more heat than light whilst going round in the same well-worn circles. So why am I bothering to write about it?

The reason is that simple dichotomies have intuitive appeal and can be very persuasive to people who don’t have the time or energy to think about them in detail. It’s all too easy to frame our thinking in terms of left/right, black/white or traditional/progressive and to overlook the fact that the world doesn’t fit neatly into those simple categories and that the categories might not be mutually exclusive. Proponents of particular policies, worldviews or educational approaches can marshal a good deal of support by simplistic framing even if that completely overlooks the complex messiness of the real world and has significant negative outcomes for real people.

The effectiveness of education, in the English speaking world at least, has been undermined by decades of overuse of the traditional/progressive dichotomy. When I was training as a teacher, if it wasn’t progressive (whatever that meant) it was bad; for some teachers now, if it isn’t traditional (whatever that means) it’s bad. What we all need is a range of educational methods that are effective in enabling students to learn. Whether those methods can be described as traditional or progressive is neither here nor there; trying to fit methods into those categories serves, as far as I can see, no useful purpose whatsoever for most of us.

joining the dots and seeing the big picture

I’m a tad cynical about charitable bodies these days, especially if they’re associated with academies. Whilst reading their ostensibly ‘independent’ reports I’m on the lookout for phrasing calculated to improve their chances of doing well in the next funding round, or for ‘product placement’ for their services. So a report from the Driver Youth Trust – Joining the Dots: Have recent reforms worked for those with SEND? – was a welcome surprise.

The Driver Youth Trust (DYT) is a charity focused on the needs of dyslexic students. Its programme Drive for Literacy is used in ARK schools. I’m well aware of the issues around ‘dyslexia’ and haven’t investigated the Drive for Literacy; in this post I want to focus on Joining the Dots, commissioned by DYT and written by LKMco.

Joining the Dots is one of the clearest, most perceptive overviews of the new SEND system that I’ve read. Some of the findings and explanations for the findings are counterintuitive, often a sign of a report driven by the evidence rather than by what the report writers think they are expected to say. The take-home message is that the new SEND system has had mixed outcomes to date, but the additional autonomy schools now have should allow them to improve outcomes for children regardless, and the report presents some inspiring case studies to prove the point.

Here are some of the findings that stood out for me.

SEND reforms interact with the rest of the education system

“Reforms to the school system since 2010 have had an even greater impact on young people with SEND than the 2014 Act itself…we find that changes have often enabled those previously succeeding to achieve even better outcomes, while things have only got tougher for those already struggling. As a result unacceptable levels of inequity have merely been reinforced. It is also clear that changes have been inadequately communicated and that many stakeholders (including parents in particular) are struggling to navigate the new landscape.” (p.7)

Fragmentation

“I think that what we did is picked up all the fragments, dropped them on the floor and made them even more fragmented… and now it’s a question of putting them back together in the right order…” – LA service delivery manager (p.15)

“SEND pupils and their families have therefore found themselves lost in a system that has yet to reform or regroup.” (p.17)

Funding

Three levels of funding are available for schools: Element 1 is basic funding for all pupils, Element 2 is a notional SEND budget based on a range of factors, and Element 3 is high needs block funding, mainly for pupils with EHC plans. The lack of ring-fencing around the notional SEND budget means that schools can spend this money however they want. (p.20)

Admissions

Pupils with SEND require additional resources and their often lower attainment can impact on the school’s standing in league tables. Parents and teachers reported concerns about admissions policies being stacked against students with SEND.

The local offer

The DfE Final Impact Report for the Pathfinder LAs trialling the new SEND framework found that only 12% of Pathfinder families had looked at their Local Offer, and only half of those had found it useful. That picture doesn’t seem to have changed. An FOI request revealed that the number of LA staff with responsibility for SEND varies between 0 and 382.8 full-time equivalents.

Schools

Schools often don’t know what information to give to the LA about their SEND pupils, and the information LAs give schools is sometimes inaccurate. The Plumcroft Primary case study illustrates the point. Plumcroft’s new headteacher tried to improve LA support for pupils with SEND but realised that services available commercially and privately were not only often better, but were actually affordable. As he put it; “If a local authority says ‘no you can’t’ most people just go ‘alright then’ and carry on with the service and whinge about it. Whereas the reality is, you can… there’s no constraint at all.” (p.35)

Categories

The new SEND system does away with the School Action and School Action Plus categories, partly because of concerns that children identified as having SEN were stuck with the label even when it was no longer applicable. The number of children identified with SEN has dropped substantially since, but concerns have been voiced about how children with additional needs are being identified and supported.

Brian Lamb highlights another concern that emerged in the early stages of the legislation: that pupils who would previously have had a Statement would, under the new system, find it ‘difficult to impossible’ to qualify for an EHCP unless they also have health difficulties or are in care (p.39). This fear doesn’t seem to have materialised, since LAs are now transferring pupils from statements to EHC plans en masse, and it’s in the interest of service providers to ask for an EHC plan to be in place in order to resource any substantial support a child needs.

All teachers are teachers of children with special educational needs

Even though the DfE itself said in 2001 that ‘all teachers are teachers of children with special educational needs’, teacher training funding has consistently failed to recognise this. The new system hasn’t introduced significant improvements.

Exam reform

A shift to making public examinations more demanding in terms of literacy automatically puts students with literacy difficulties at a disadvantage. A student might have an excellent knowledge and understanding of the subject matter, but be unable to get it down on paper. The distribution of assistive technology varies widely between schools.

Reinventing the wheel

LA bureaucracy has been seen as a significant factor in the move over recent years to give schools increased autonomy. This has resulted, predictably, in increased concerns over transparency, accountability, expertise and resources. Many schools are now forming federations in order to pool resources and share expertise. There is clearly a need for an additional tier of organisation at the local level, suggesting that it might have been more sensible to improve local authority practice rather than marginalise it.

The content of the report might not be especially cheering, but it makes a change to find a report that’s so readable, informative and insightful.

learning styles: a response to Greg Ashman

In a post entitled Why I’m happy to say that learning styles don’t exist, Greg Ashman says that one of the arguments I used in my previous post about learning styles “seems to be about the semantics of falsification“. I’m not sure that semantics is quite the right term, but the falsification of hypotheses certainly was a key point. Greg points out that “falsification does not mean proving with absolute certainty that something does not exist because you can’t do this and it would therefore be impossible to falsify anything”. I agree completely. It’s at the next step that Greg and I part company.

Greg seems to be arguing that because we can’t falsify a hypothesis with absolute certainty, sufficient evidence of falsification is enough to be going on with. That’s certainly true for science as a work-in-progress. But he then goes on to imply that if there’s little evidence that something exists, the lack of evidence for its existence is good enough to warrant us concluding it doesn’t exist.

I’m saying that because we can’t falsify a hypothesis with absolute certainty, we can never legitimately conclude that something doesn’t exist. All we can say is that it’s very unlikely to exist. Science isn’t about certainty, it’s about reducing uncertainty.

My starting point is that because we don’t know anything with absolute certainty, there’s no point making absolutist statements about whether things exist or not. That doesn’t get us anywhere except into pointless arguments.

Greg’s starting point appears to be that if there’s little evidence that something exists, we can safely assume it doesn’t exist, therefore we are justified in making absolutist claims about its existence.

Claiming categorically that learning styles, Santa Claus or fairies don’t exist is unlikely to have a massively detrimental impact on people’s lives. But putting the idea into teachers’ heads that good-enough falsification allows us to dismiss outright the existence of anything for which there’s little evidence is risky. The history of science is littered with tragic examples of theories being prematurely dismissed on the basis of little evidence – germ theory springing first to mind.

testing the learning styles hypothesis

Greg also says “a scientific hypothesis is one which makes a testable prediction. Learning styles theories do this.”

No they don’t. That’s the problem. Mathematicians can precisely define the terms in an equation. Philosophers can decide what they want the entities in their arguments to mean. Thanks to some sterling work on the part of taxonomists, there’s now a strong consensus on what a swan, a crow or a duck-billed platypus is, rather than the appalling muddle that preceded it. But learning styles are not terms in an equation, or entities in philosophical arguments. They are not even like swans, crows or duck-billed platypuses; they are complex, fuzzy conceptual constructs. Unless you are very clear about how the particular constructs in your learning styles model can be measured, so that everyone who tests your model is measuring exactly the same thing, the hypotheses might be testable in principle, but in reality it’s quite likely no one has tested them properly. And that’s before you even get to what the conceptual constructs actually map on to in the real world.

This is a notorious problem for the social sciences. It doesn’t follow that all conceptual constructs are invalid, or that all hypotheses involving them are pseudoscience, or that the social sciences aren’t sciences at all. All it means is that social scientists often need to be a lot more rigorous than they have been.

I don’t understand why it’s so important for Daniel Willingham or Tom Bennett or Greg Ashman to categorise learning styles – or anything else for that matter – as existing or not. The evidence for the existence of Santa Claus, fairies or the Loch Ness monster is pretty flimsy, so most of us work on the assumption that they don’t exist. The fact that we can’t prove conclusively that they don’t exist doesn’t mean that we should be including them in lesson plans. But I’m not advocating the use of Santa Claus, fairies, the Loch Ness monster or learning styles in the classroom. I’m pointing out that saying ‘learning styles don’t exist’ goes well beyond what the evidence shows and, contrary to what Greg says in his post, implies that we can falsify a hypothesis with absolute certainty.

Absence of evidence is not evidence of absence. That’s an important scientific principle. It’s particularly relevant to a concept like learning styles, which is an umbrella term for a whole bunch of models encompassing a massive variety of allegedly stable traits, most of which have been poorly operationalized and poorly evaluated in terms of their contribution – or otherwise – to learning. The evidence about learning styles is weak, contradictory and inconclusive. I can’t see why we can’t just say that it’s weak, contradictory and inconclusive, so teachers would be well advised to give learning styles a wide berth – and leave it at that.

learning styles: what does Tom Bennett* think?

Tom Bennett’s disdain for learning styles is almost palpable, reminiscent at times of Richard Dawkins commenting on a papal pronouncement, but it started off being relatively tame. In May 2013, in a post on the ResearchEd2013 website coinciding with the publication of his book Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it, he asks ‘why are we still talking about learning styles?’ and claims “there is an overwhelming amount of evidence suggesting that learning styles do not exist, and that therefore we should not be instructing students according to these false preferences.”

In August the same year, in his New Scientist post Separating neuromyths from science in education, he tones down the claim a little, pointing out that learning styles models are “mostly not backed by credible evidence”.

But the following April, Tom’s back with a vitriolic vengeance in the TES with Zombie bølløcks: World War VAK isn’t over yet. He rightly – and colourfully – points out that time or resources shouldn’t be wasted on initiatives that have not been demonstrated to be effective. And he’s quite right to ask “where were the educationalists who read the papers, questioned the credentials and demanded the evidence?” But Bennett isn’t just questioning, he’s angry.

He’s thinking of putting on his “black Thinking Hat of reprobation and fury”. Why? Because “it’s all bølløcks, of course. It’s bølløcks squared, actually, because not only has recent and extensive investigation into learning styles shown absolutely no correlation between their use and any perceptible outcome in learning, not only has it been shown to have no connection to the latest ways we believe the mind works, but even investigation of the original research shows that it has no credible claim to be taken seriously. Learning Styles are the ouija board of serious educational research” and he includes a link to Pashler et al to prove it.

Six months later, Bennett teams up with Daniel Willingham for a TES piece entitled Classroom practice – Listen closely, learning styles are a lost cause in which Willingham reiterates his previous arguments and Tom contributes an opinion piece dismissing what he calls zombie theories, ranging from red ink negativity to Neuro-Linguistic Programming and Multiple Intelligences.

why learning styles are not a neuromyth

Tom’s anger would be justified if he were right. But he isn’t. In May 2013, in Teacher Proof: Why research in education doesn’t always mean what it claims, and what you can do about it he says of the VAK model “And yet there is no evidence for it whatsoever. None. Every major study done to see if using learning style strategies actually work has come back with totally negative results” (p.144). He goes on to dismiss Kolb’s Learning Style Inventory and Honey and Mumford’s Learning Styles Questionnaire, adding “there are others but I’m getting tired just typing all the categories and wondering why they’re all so different and why the researchers disagree” (p.146). That tells us more about Tom’s evaluation of the research than it does about the research itself.

Education and training research has long suffered from a serious lack of rigour. One reason for that is that they are both heavily derived fields of discourse; education and training theory draws on disciplines as diverse as psychology, sociology, philosophy, politics, architecture, economics and medicine. Education and training researchers need a good understanding of a wide range of fields. Taking all relevant factors into account is challenging, and in the meantime teachers and trainers have to get on with the job. So it’s tempting to get an apparently effective learning model out there ASAP, rather than make sure it’s rigorously tested and systematically compared to other learning models first.

Review paper after review paper has come to similar conclusions when evaluating the evidence for learning styles models:

• there are many different learning styles models, featuring many different learning styles
• it’s difficult to compare models because they use different constructs
• the evidence supporting learning styles models is weak, often because of methodological issues
• some models do have validity or reliability; others don’t
• people do have different aptitudes in different sensory modalities, but
• there’s no evidence that teaching/training all students in their ‘best’ modality improves performance.

If Tom hadn’t got tired typing he might have discovered that some learning styles models have more validity than the three he mentions. And if he’d read the Coffield review more carefully he would have found out that the reason models are so different is because they are based on different theories and use different (often poorly operationalized) constructs and that researchers disagree for a host of reasons, a phenomenon he’d do well to get his head round if he wants teachers to get involved in research.

evaluating the evidence

Reviewers of learning styles models have evaluated the evidence by looking in detail at its content and quality and have then drawn general conclusions. They’ve examined, for example, the validity and reliability of component constructs, what hypotheses have been tested, the methods used in evaluating the models and whether studies have been peer-reviewed.
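‘Reliability’ here has a concrete computational meaning. As a rough illustration – the four-item scale and the respondents’ scores below are entirely invented, not taken from any actual review – here’s Cronbach’s alpha, a standard internal-consistency measure of the kind reviewers examine:

```python
import numpy as np

# Hypothetical questionnaire data: 6 respondents x 4 items, scored 1-5.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]                            # number of items
item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores

alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")  # ~0.7 or above is usually taken as acceptable
```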

What they’ve found is that people do have learning styles (depending on how learning style is defined), but there are considerable variations in validity and reliability between learning styles models, and that overall the quality of the evidence isn’t very good. As a consequence, reviewers have been in general agreement that there isn’t enough evidence to warrant teachers investing time or resources in a learning styles approach in the classroom.

But Tom’s reasoning appears to move in the opposite direction; to start with the conclusion that teachers shouldn’t waste time or resources on learning styles, and to infer that:

• variable evidence means all learning styles models can be rejected
• poor quality evidence means all learning styles models can be rejected
• if some learning styles models are invalid and unreliable they must all be invalid and unreliable
• if the evidence is variable and poor and some learning styles models are invalid or unreliable, then
• learning styles don’t exist.

definitions of learning style

It’s Daniel Willingham’s video Learning styles don’t exist that sums it up for Tom. So why does Willingham say learning styles don’t exist? It all depends on definitions, it seems. On his learning styles FAQ page, Willingham says:

I think that often when people believe that they observe obvious evidence for learning styles, they are mistaking it for ability… The idea that people differ in ability is not controversial—everyone agrees with that. Some people are good at dealing with space, some people have a good ear for music, etc. So the idea of “style” really ought to mean something different. If it just means ability, there’s not much point in adding the new term.

This is where Willingham lost me. Obviously, a preference for learning in a particular way is not the same as an ability to learn in a particular way. And I agree that there’s no point talking about style if what you mean is ability. The VAK model claims that preference is an indicator of ability, and the evidence doesn’t support that hypothesis.

But not all learning styles models are about preference; most claim to identify patterns of ability. That’s why learning styles models have proliferated; employers want a quick overall assessment of employees’ strengths and weaknesses when it comes to learning. Because the models encompass factors other than ability – such as personality and ways of approaching problem-solving – referring to learning styles rather than ability seems reasonable.

So if the idea that people differ in ability is not controversial, many learning styles models claim to assess ability, and some are valid and/or reliable, how do Willingham and Bennett arrive at the conclusion that learning styles don’t exist?

The answer, I suspect, is that they are equating learning styles with the VAK model, the model most widely used in primary education. It’s no accident that Coffield et al evaluated learning styles and pedagogy in post-16 learning; it’s the world outside the education system that’s the main habitat of learning styles models. It’s fair to say there’s no evidence to support the VAK model – and many others – and that it’s not worth teachers investing time and effort in them. But the evidence simply doesn’t warrant lumping together all learning styles models and dismissing them outright.

taking liberties with the evidence

I can understand that if you’re a teacher who’s been consistently told that learning styles are the way to go and then discover there’s insufficient evidence to warrant using them, you might be a bit miffed. But Tom’s reprobation and fury doesn’t warrant him taking liberties with the evidence. This is where I think Tom’s thinking goes awry:

• If the evidence supporting learning styles models is variable, it’s variable. It means some learning styles models are probably rubbish but some aren’t. Babies shouldn’t be thrown out with bathwater.

• If the evidence evaluating learning styles is of poor quality, it’s of poor quality. You can’t conclude from poor quality evidence that learning styles models are rubbish. You can’t conclude anything from poor quality evidence.

• If the evidence for learning styles models is variable and of poor quality, it isn’t safe to conclude that learning styles don’t exist. Especially if review paper after review paper has concluded that they do – depending on your definition of learning styles.

I can understand why Willingham and Bennett want to alert teachers to the lack of evidence for the VAK learning styles model. But I felt Daniel Willingham’s claim that learning styles don’t exist was misleading and that Tom Bennett’s vitriol was unjustified. There’s a real risk, in the case of learning styles, of one neuromyth being replaced by another.

*Tom appears to have responded to this post here and here, with two more articles about zombies.

References
Coffield F., Moseley D., Hall, E. & Ecclestone, K. (2004). Learning styles and pedagogy in post-16 learning: A systematic and critical review. Learning and Skills Research Council.

Pashler, H., McDaniel, M., Rohrer, D. & Bjork, R. (2008). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9, 106-116.