The Tiger Teachers and cognitive science

Cognitive science is a key plank in the Tiger Teachers’ model of knowledge. If I’ve understood it properly, the model looks something like this:

Cognitive science has discovered that working memory has limited capacity and duration, so pupils can’t process large amounts of novel information. If this information is secured in long-term memory via spaced, interleaved practice, students can recall it instantly whenever they need it, freeing up working memory for thinking.

What’s wrong with that? Nothing, as it stands. It’s what’s missing that’s the problem.

Subject knowledge

One of the Tiger Teachers’ beefs about the current education system is its emphasis on transferable skills. They point out that skills are not universally transferable, many are subject-specific, and in order to develop expertise in higher-level skills novices need a substantial amount of subject knowledge. Tiger Teachers’ pupils are expected to pay attention to experts (their teachers) and memorise a lot of facts before they can comprehend, apply, analyse, synthesise or evaluate. The model is broadly supported by cognitive science and the Tiger Teachers apply it rigorously to children. But not to themselves, it seems.

For most Tiger Teachers cognitive science will be an unfamiliar subject area. That makes them (like most of us) cognitive science novices. Obviously they don’t need to become experts in cognitive science to apply it to their educational practice, but they do need the key facts and concepts and a basic overview of the field. The overview is important because they need to know how the facts fit together and the limitations of how they can be applied. But with a few honourable exceptions (Daisy Christodoulou, David Didau and Greg Ashman spring to mind – apologies if I’ve missed anyone out), many Tiger Teachers don’t appear to have even thought about acquiring expertise, key facts and concepts or an overview. As a consequence, facts are misunderstood or overlooked, principles from other knowledge domains are applied inappropriately, and erroneous assumptions are made about how science works. Here are some examples:

It’s a fact…

“Teachers’ brains work exactly the same way as pupils’” (p.177). No they don’t. Cognitive science (ironically) thinks that children’s brains begin by forming trillions of connections (synapses). Then through to early adulthood, synapses that aren’t used get pruned, which makes information processing more efficient. (There’s a good summary here.)  Pupils’ brains are as different to teachers’ brains as children’s bodies are different to adults’ bodies. Similarities don’t mean they’re identical.

Then there’s working memory. “As the cognitive scientist Daniel Willingham explains, we learn by transferring knowledge from the short-term memory to the long term memory” (p.177). Well, kind of – if you assume that what Willingham explicitly describes as “just about the simplest model of the mind possible” is an exhaustive model of memory. If you think that, you might conclude, wrongly, “the more knowledge we have in long-term memory, the more space we have in our working memory to process new information” (p.177). Or that “information cannot accumulate into long-term memory while working memory is being used” (p.36).

Long-term memory takes centre stage in the Tiger Teachers’ model of cognition. The only downside attributed to it is our tendency to forget things if we don’t revisit them (p.22). Other well-established characteristics of long-term memory – its unreliability, errors and biases – are simply overlooked, despite Daisy Christodoulou’s frequent citation of Daniel Kahneman whose work focused on those flaws.

With regard to transferable skills we’re told “cognitive scientist Herb Simon and his colleagues have cast doubt on the idea that there are any general or transferable cognitive skills” (p.17), when what they actually cast doubt on is the ideas that all skills are transferable or that none are.

The Michaela cognitive model is distinctly reductionist: “all there is to intelligence is the simple accrual and tuning of many small units of knowledge that in total produce complex cognition” (p.19). Then there’s “skills are simply just a composite of sequential knowledge – all skills can be broken down to irreducible pieces of knowledge” (p.161).

The statement about intelligence is a direct quote from John Anderson’s paper ‘A Simple Theory of Complex Cognition’ but Anderson isn’t credited, so you might not know he was talking about simple encodings of objects and transformations, and that by ‘intelligence’ he means how ants behave rather than IQ. I’ve looked at Daisy Christodoulou’s interpretation of Anderson’s model here.

The idea that intelligence and skills consist ‘simply just’ of units of knowledge ignores Anderson’s procedural rules and marginalises the role of the schema – the way people configure their knowledge. Joe Kirby mentions “procedural and substantive schemata” (p.17), but seems to see them only in terms of how units of knowledge are configured for teaching purposes: “subject content knowledge is best organised into the most memorable schemata … chronological, cumulative schemata help pupils remember subject knowledge in the long term” (p.21). The concept of schemata as the way individuals, groups or entire academic disciplines configure their knowledge – that the same knowledge can be configured in different ways resulting in different meanings, or that configurations sometimes turn out to be profoundly wrong – doesn’t appear to feature in the Tiger Teachers’ model.

Skills: to transfer or not to transfer?

Tiger Teachers see higher-level skills as subject-specific. That hasn’t stopped them applying higher-level skills from one domain inappropriately to another. In her critique of Bloom’s taxonomy, Daisy Christodoulou describes it as a ‘metaphor’ for the relationship between knowledge and skills. She refers to two other metaphors: ED Hirsch’s scrambled egg and Joe Kirby’s double helix (Seven Myths p.21). Daisy, Joe and ED teach English, and metaphors are an important feature in English literature. Scientists do use metaphors, but they use analogies more often, because in the natural world patterns often repeat themselves at different levels of abstraction. Daisy, Joe and ED are right to complain about Bloom’s taxonomy being used to justify divorcing skills from knowledge. And the taxonomy itself might be wrong or misleading. But it is a taxonomy, and it is based on an important scientific concept – levels of abstraction – so it should be critiqued as such, not as if it were a device used by a novelist.

Not all evidence is equal

A major challenge for novices is what criteria they can use to decide whether or not factual information is valid. They can’t use their overview of a subject area if they don’t have one. They can’t weigh up one set of facts against another if they don’t know enough facts. So Tiger Teachers who are cognitive science novices have to fall back on the criteria ED Hirsch uses to evaluate psychology – the reputation of researchers and consensus. Those might be key criteria in evaluating English literature, but they’re secondary issues for scientific research, and for good reason.

Novices then have to figure out how to evaluate the reputation of researchers and consensus. The Tiger Teachers struggle with reputation. Daniel Willingham and Paul Kirschner are cited more frequently than Herb Simon, but with all due respect to Willingham and Kirschner, they’re not quite in the same league. Other key figures don’t get a mention. When asked what was missing from the Tiger Teachers’ presentations at ResearchEd, I suggested, for starters, Baddeley and Hitch’s model of working memory. It’s been a dominant model for 40 years and has the rare distinction of being supported by later biological research. But it’s mentioned only in an endnote in Willingham’s Why Don’t Students Like School and in Daisy’s Seven Myths about Education. I recommended inviting Alan Baddeley to speak at ResearchEd – he’s a leading authority on memory, after all. One of the teachers said he’d never even heard of him. So why was that teacher doing a presentation on memory at a national education conference?

The Tiger Teachers also struggle with consensus. Joe Kirby emphasises the length of time an idea has been around and the number of studies that support it (pp.22-3), overlooking the fact that some ideas can dominate a field for decades, be supported by hundreds of studies and then turn out to be profoundly wrong; theories about how brains work are a case in point. Scientific theory doesn’t rely on the quantity of supporting evidence; it relies on an evaluation of all relevant evidence – supporting and contradictory – and takes into account the quality of that evidence as well. That’s why you need a substantial body of knowledge before you can evaluate it.

The big picture

For me, Battle Hymn painted a clearer picture of the Michaela Community School than I’d been able to put together from blog posts and visitors’ descriptions. It persuaded me that Michaela’s approach to behaviour management is about being explicit and consistent, rather than simply being ‘strict’. I think having a week’s induction for new students and staff (‘bootcamp’) is a great idea. A systematic, rigorous approach to knowledge is vital and learning by rote can be jolly useful. But for me, those positives were all undermined by the Tiger Teachers’ approach to their own knowledge. Omitting key issues in discussions of Rousseau’s ideas, professional qualifications or the special circumstances of schools in coastal and rural areas is one thing. Pontificating about cognitive science and then ignoring what it says is quite another.

I can understand why Tiger Teachers want to share concepts like the limited capacity of working memory and skills not being divorced from knowledge.  Those concepts make sense of problems and have transformed their teaching.  But for many Tiger Teachers, their knowledge of cognitive science appears to be based on a handful of poorly understood factoids acquired second or third hand from other teachers who don’t have a good grasp of the field either. Most teachers aren’t going to know much about cognitive science; but that’s why most teachers don’t do presentations about it at national conferences or go into print to share their flimsy knowledge about it.  Failing to acquire a substantial body of knowledge about cognitive science makes its comprehension, application, analysis, synthesis and evaluation impossible.  The Tiger Teachers’ disregard for principles they claim are crucial is inconsistent, disingenuous, likely to lead to significant problems, and sets a really bad example for pupils. The Tiger Teachers need to re-write some of the lyrics of their Battle Hymn.

getting the PISA scores under control

The results of the OECD’s 2015 Programme for International Student Assessment (PISA) were published a couple of weeks ago. The PISA assessment has measured the performance of 15 year-olds in Reading, Maths and Science every three years since 2000. I got the impression that teachers and academics (at least those using social media) were interested mainly in various aspects of the analysis. The news media, in contrast, focussed on the rankings. So did the OECD and politicians, according to the BBC website. Andreas Schleicher of the OECD mentioned Singapore ‘getting further ahead’ and John King, the US Education Secretary, referred to the US ‘losing ground’.

What they are talking about are some single-digit changes in scores of almost 500 points. Although the PISA analysis might be informative, the rankings tell us very little. No one will get promoted or relegated as a consequence of their position in the PISA league table. Education is not football. What educational performance measures do have in common with all other performance measures – from football to manufacturing – is that performance is an outcome of causal factors. Change the causal factors and the performance will change.

common causes vs special causes

Many factors impact on performance. Some fluctuations are inevitable because of the variation inherent in raw materials, climatic conditions, equipment, human beings etc. Other changes in performance occur because a key causal factor has changed significantly. The challenge is in figuring out whether fluctuations are due to variation inherent in the process, or whether they are due to a change in the process itself – referred to as common causes and special causes, respectively.

The difference between common causes and special causes is important because there’s no point spending time and effort investigating common causes. Your steel output might have suffered because of a batch of inferior iron ore, your team might have been relegated because two key players sustained injuries, or your PISA score might have fallen a couple of points  due to a flu epidemic just before the PISA tests. It’s impossible to prevent such eventualities and even if you could, some other variation would crop up instead. However, if performance has improved or deteriorated following a change in supplier, strategy or structure you’d want to know whether or not that special cause has had a real impact.

spotting the difference

This was the challenge facing Walter A Shewhart, a physicist, engineer and statistician working for the Western Electric Company in the 1920s. Shewhart figured out a way of representing variations in performance so that quality controllers could see at a glance whether the variation was due to common causes or special causes. The representation is generally known as a control chart. I thought it might be interesting to plot some PISA results as a control chart, to see if changes in scores represented a real change or whether they were the fluctuations you’d expect to see due to variation inherent in the process.

If I’ve understood Shewhart’s reasoning correctly, it goes like this: Even if you don’t change your process, fluctuations in performance will occur due to the many different factors that impact on the effectiveness of your process. In the case of the UK’s PISA scores, each year similar students have learned and been assessed on very similar material, so the process remains unchanged; what the PISA scores measure is student performance. But student performance can be affected by a huge number of factors: health, family circumstances, teacher recruitment, changes to the curriculum a decade earlier etc.

For statistical purposes, the variation caused by those multiple factors can be treated as random. (It isn’t truly random, but for most intents and purposes can be treated as if it is.) This means that over time, UK scores will form a normal distribution – most will be close to the mean, a few will be higher and a few will be lower. And we know quite a bit about the features of normal distributions.

Shewhart came up with a formula for calculating the upper and lower limits of the variation you’d expect to see as a result of common causes. If a score falls outside those limits, it’s worth investigating because it probably indicates a special cause. If it doesn’t, it isn’t worth investigating, because it’s likely to be due to common causes rather than a change to the process. Shewhart’s method is also useful for finding out whether or not an intervention has made a real difference to performance.  Donald Wheeler, in Understanding Variation: The key to managing chaos, cites the story of a manager spotting a change in performance outside the control limits and discovering it was due to trucks being loaded differently without the supervisor’s knowledge.

getting the PISA scores under control

I found it surprisingly difficult, given the high profile of the PISA results, to track down historical data, and I couldn’t access it via the PISA website – if anyone knows of an accessible source I’d be grateful. The same goes for anyone spotting errors in my calculations. I decided to use the UK’s overall scores for Mathematics as an example. In 2000 and 2003 the UK assessments didn’t meet the PISA criteria, so the 2000 score is open to question and the 2003 score was omitted from the tables.

I’ve followed the method set out in Donald Wheeler’s book, which is short, accessible and full of examples. At first glance the formulae might look a bit complicated, but the maths involved is very straightforward. Year 6s might enjoy applying it to previous years’ SATs results.

Step 1: Plot the scores and find the mean.

year             2000*   2003*   2006   2009   2012   2015   mean (Xbar§)
UK maths score    529      –      495    492    494    492   500.4

Table 1: UK maths scores 2000-2015

* In 2000 and 2003 the UK assessments didn’t meet the PISA criteria, so the 2000 score is open to question and the 2003 score was omitted from the results.

§  I was chuffed when I figured out how to type a bar over a letter (the symbol for mean) but it got lost in translation to the blog post.

Fig 1: UK Maths scores and mean score

Step 2: Find the moving range (mR) values and calculate their mean. The mR values are the absolute differences between consecutive scores.

year             2000   2003   2006   2009   2012   2015   mean (Rbar)
UK maths score    529     –     495    492    494    492
mR values                        34      3      2      2   10.25

Table 2: moving range (mR values) 2000-2015

Fig 2: Differences between consecutive scores (mR values)

Step 3: Calculate the Upper Control Limit for the mR values (UCLR). To do this we multiply the mean of the mR values (Rbar) by 3.27.

UCLR = 3.27 x Rbar = 3.27 x 10.25 = 33.52

Fig 3: Differences between scores (mR values) showing upper control limit (UCLR)

Step 4: Calculate the Upper Natural Process Limit (UNPL) for the individual scores using the formula UNPL = Xbar + (2.66 x Rbar).

UNPL = Xbar + (2.66 x Rbar) = 500.4 + (2.66 x 10.25) = 500.4 + 27.27 = 527.67

Step 5: Calculate the Lower Natural Process Limit (LNPL) for the individual scores using the formula LNPL = Xbar – (2.66 x Rbar).

LNPL = Xbar – (2.66 x Rbar) = 500.4 – (2.66 x 10.25) = 500.4 – 27.27 = 473.13
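For anyone who’d like to check the arithmetic, here’s a minimal sketch of Steps 1-5 in Python (the variable names follow Wheeler’s symbols; the code itself is mine, not his):

```python
# XmR chart limits for the UK maths scores, following Wheeler's method.
# 2003 is omitted because that year's UK assessment didn't meet the PISA criteria.
scores = [529, 495, 492, 494, 492]  # years 2000, 2006, 2009, 2012, 2015

# Step 1: mean of the individual scores (Xbar)
xbar = sum(scores) / len(scores)  # 500.4

# Step 2: moving ranges (absolute differences between consecutive scores) and their mean (Rbar)
mr_values = [abs(b - a) for a, b in zip(scores, scores[1:])]  # [34, 3, 2, 2]
rbar = sum(mr_values) / len(mr_values)  # 10.25

# Step 3: upper control limit for the moving ranges
uclr = 3.27 * rbar  # ~33.52

# Steps 4 and 5: upper and lower natural process limits for the individual scores
unpl = xbar + 2.66 * rbar  # ~527.67
lnpl = xbar - 2.66 * rbar  # ~473.13

print(f"Xbar={xbar}, Rbar={rbar}, UCLR={uclr:.2f}, UNPL={unpl:.2f}, LNPL={lnpl:.2f}")
```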

We can now plot the UK’s Maths scores showing the upper and lower natural process limits – the limits of the variation you’d expect to see as a result of common causes.

Fig 4: UK Maths scores showing upper and lower natural process limits

What Fig 4 shows is that the UK’s 2000 Maths score falls just outside the upper natural process limit, so even if the OECD hadn’t told us it was an anomalous result, we’d know that something different happened to the process in that year. You might think this is pretty obvious because there’s such a big difference between the 2000 score and all the others. But what if the score had been just a bit lower?  I put in some other numbers:

score          Xbar    Rbar    UCLR    UNPL     LNPL
529 (actual)   500.4   10.25   33.52   527.67   473.13
520            498.6    8.00   26.16   519.88   477.32
510            496.6    5.50   17.99   511.23   481.97
500            494.6    3.00    9.81   502.58   486.62

Table 3: outcomes of alternative scores for year 2000

Table 3 shows if the score had been 520, it would still have been outside the natural process limits, but a score of 510 would have been within them.
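Wrapped up as a function, that what-if substitution takes only a few lines (again a sketch of my own, not Wheeler’s code):

```python
def xmr_limits(values):
    """Return (xbar, rbar, uclr, unpl, lnpl) for a sequence of individual values."""
    xbar = sum(values) / len(values)
    mr_values = [abs(b - a) for a, b in zip(values, values[1:])]
    rbar = sum(mr_values) / len(mr_values)
    return xbar, rbar, 3.27 * rbar, xbar + 2.66 * rbar, xbar - 2.66 * rbar

# Substitute alternative scores for the year 2000 and see which fall outside the limits.
for score_2000 in (529, 520, 510, 500):
    xbar, rbar, uclr, unpl, lnpl = xmr_limits([score_2000, 495, 492, 494, 492])
    verdict = "outside" if score_2000 > unpl or score_2000 < lnpl else "within"
    print(f"{score_2000}: UNPL={unpl:.2f}, LNPL={lnpl:.2f} -> {verdict} the limits")
# 529 and 520 fall outside the limits; 510 and 500 within them, matching Table 3.
```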

Fig 5: UK Maths scores showing upper and lower natural process limits for a year 2000 score of 510

ups, downs and targets

The ups and downs of test results are often viewed as more important than they really are; up two points good, down two points bad – even though a two-point fluctuation might be due to random variation.

The process control model has significant implications for target-setting too. Want to improve your score?  Then you need to work harder or smarter. Never mind the fact that students and teachers can work their socks off only to find that their performance is undermined by a crisis in recruiting maths teachers or a whole swathe of schools converting to academies. Working harder or smarter but ignoring natural variation supports what’s been called Ackoff’s proposition – that “almost every problem confronting our society is a result of the fact that our public policy makers are doing the wrong things and are trying to do them righter”.

To get tough on PISA scores we need to get tough on the causes of PISA scores.

Reference

Wheeler, D.J. (1993). Understanding Variation: The key to managing chaos. SPC Press Inc, Knoxville, Tennessee.

Reforming the SEND system – for good

In the previous post, I claimed that teacher training and targets were two factors that explained why the current SEND system couldn’t work – and why it has never worked effectively. In this post, I’ll explain my claims about teacher training and targets and suggest how the SEND system could become both effective and sustainable.

Teacher training

For any system – education, health or social care – to meet the needs of a varied population, two ingredients are vital: expertise and flexibility. Practitioners need the knowledge and experience to deal with any needs they might encounter and the system has to be able to adapt to whatever needs arise.

Bizarrely, teachers have always been expected to teach the 98% or so of children who attend mainstream schools, but have only ever been trained to teach the 80% who don’t have SEN, not the 20% who do. And since funding was withdrawn for special education Master’s degrees in the mid-1980s, SEN expertise has gradually leached out of the education system as a whole as special education teachers have retired. It’s only since 2009 that new SENCOs (special educational needs co-ordinators) have been required to be qualified teachers, and only recent appointees are required to have SEN training. There is still a massive gap in SEND expertise within the education system. How can teachers teach children if they don’t know how to meet their educational needs?

Targets

Setting targets sounds like an obvious way to improve performance. You set the target, expect someone to meet it whatever that takes, and provide some sticks and carrots for their encouragement. Targets, accompanied by sticks and carrots, were part and parcel of the early education system but were abandoned because they didn’t work.  And as quality control researchers have been telling us since at least the 1920s, performance depends on the factors that contribute to it. In the current education system, the measure of school performance is actually pupil performance in SATs or GCSEs. But how children perform in tests is influenced by many factors; their health, family circumstances, life events, quality of teaching, their own learning etc. Schools have little or no control over most of those factors, so to measure school performance by pupil performance in tests is pointless.

Despite the evidence, the current education system still sets targets. And the sticks and carrots expected to encourage schools to raise their (pupil) performance mean that there are no incentives for a school to invest resources in the education of students who are unlikely to improve the school’s test results. If students aren’t going to meet the ‘expected standard’ however hard they or the school try, why invest resources in them? Why not focus on the children likely to meet the ‘expected standard’ with a bit of extra effort?

So, teacher training and targets have been major factors in marginalising the education of children with SEND. But even if the government had a forehead-slapping moment, cried ‘How foolish we’ve been!’, required all teachers to be trained to teach all the children in their classes, and abandoned its ‘expected standards’ criteria, it would take years to transform the system into a SEND-friendly one. Children with SEND don’t have years to spare and their parents have to deal with the here and now. So what needs to be done?

Parents can’t police the system

This post was prompted by a recent conversation I had with a parent carer forum. The forum was of the opinion that parents with good knowledge of the national framework and their local offer can use that knowledge to get commissioners and providers to make suitable educational provision for children.

It’s certainly true that knowledge of the national framework and the local offer (however incomplete) can help. How effective it is at getting commissioners and providers to meet their statutory obligations is another matter. Since the new system was introduced, I’ve been told repeatedly that it’s improved outcomes for parents and children. Maybe – but I have yet to see any such improvement. What I have seen is parents who know the national framework backwards having to resort to mediation, tribunal, formal complaint, the Local Government Ombudsman and in some cases being advised that their only option is Judicial Review – exactly the kind of problems that prompted the revision of the SEN system in 2014.

Until I had the conversation with the parent carer forum, I’d assumed these hurdles were the unwanted and unintended consequences of flaws in legislation that had been rushed through (the pilot study didn’t finish until after the legislation came into force). Then the penny dropped. The only explanation that made sense was that individual parents challenging commissioners and providers is the government’s chosen method of enforcing the new legislation.

That’s a terrible way of enforcing legislation. For many parents of children with SEND, it’s as much as they can do to hold the family together. To expect parents in already challenging circumstances to police a flawed system that was rushed through at a time when LAs are struggling with huge budget cuts is to put vulnerable families in harm’s way. Not only is that strategy likely to fail to bring about compliance on the part of commissioners and providers, it’s morally reprehensible. For 150 years, if a school failed a child, parents have been able to appeal to school boards, independent governors or their LEA for support. Not any more. Parents (and children with SEND) are on their own.

What needs changing and who can change it?

The system still needs to change and if parents don’t change it no one else will, so what to do? Since my family entered the SEN ‘world’ 14 years ago, I’ve seen parents fighting lone battles with their LA; the same battles replicated hundreds, if not thousands, of times. I’ve seen parents new to the system set up support or campaign groups only to discover they are just one in a long line of support or campaign groups that have either burned out or at best brought about change that hasn’t actually made much difference on the ground.

What the individual parents and campaign groups have lacked is focus and organisation. I don’t mean they’ve been unfocussed or disorganised; some of them could focus and organise for England. And there’s no doubt that parent groups were instrumental in getting the SEND system changed. It’s rather that there’s been a lot of duplication of effort and the focus has been on single issues or fighting on all fronts at once rather than on the key points in the system that are causing the problems.

I think the key points are these:

  • Mainstream teachers should know how to teach all the children in mainstream schools.
  • Each child needs an education suitable for them as an individual rather than for the average child in their age group, as the law already requires.
  • Assessment and funding should be the responsibility of separate bodies – the new legislation didn’t do away with the LAs’ conflict of interest.
  • There should be an independent body (with teeth) responsible for implementation and compliance that should support parents in their dealings with commissioners and providers. Parents should not have to resort to legal action except in extreme cases.
  • Parents struggling with the system need more support than they are currently offered. A buddying system matching up parents in similar positions dealing with the same local authority might help. As would training in negotiation.

Much of the negotiation undertaken by individual parents and parent groups is with schools, LA officers or the DfE. And problems with the SEND system are generally seen not as being with the structure of the education system or the SEND legislation, but with implementation. But the problem runs deeper than implementation, and deeper than the SEND legislation. It lies with the structure of the education system as a whole, and with the market model espoused by successive governments. Instead of lobbying LA officers and DfE officials who are trying to implement the law as it stands, groups of parents should be lobbying their local councillors and MPs to ensure that teachers are suitably trained, arbitrary targets are abandoned, and responsibility for implementing the system is distributed more widely. These changes won’t require significant new legislation, but they might require a big shift in thinking.


A short history of special education

In 2006 a House of Commons Select Committee described the special educational needs system as ‘no longer fit for purpose’. By September 2014 a new system was in place. Two years on, it’s safe to say it hasn’t been an unmitigated success. To understand why the new system hasn’t worked – and indeed can’t work – it might help to take a look at the history of special educational needs and disability (SEND).

A short history of SEND

Education became compulsory in England in 1880. Some local school boards set up special schools or special classes within mainstream schools for physically ‘handicapped’* children, but provision was patchy. What took people by surprise was the number of mentally handicapped children who turned up to school.

At the time, teachers were expected to teach according to the official Code – essentially a core curriculum – and many schools in the fledgling national educational system were seriously under-resourced. Teachers were often untrained, paper was very expensive (hence the use of rote learning and slates) and many schools operated in conditions like the one below – with several classes in one room. They just weren’t equipped to cope with children with learning difficulties or disabilities.

Shepherd Street School in Preston in 1902§

Two Royal Commissions were set up to investigate the education of handicapped children, and reported in 1889 and 1896 respectively. Both recommended the integration of the children in mainstream schools where possible and that special provision (classes or schools) should be made by school boards. The emphasis was on children acquiring vocational skills so they could earn a living. Those with the most severe mental handicap were deemed ‘ineducable’.

The Royal Commissions’ recommendations, and many others made over the next few decades, were clearly well-intentioned. Everybody wanted the best outcomes for the children. The challenge was how to get there. After WW2, concerns about the special education system increased. Parents felt they had little control, the number of pupils in special schools was rising, and children were still being institutionalised or marginalised from society. In 1973 Margaret Thatcher, then Education Secretary, commissioned a review of the education of handicapped children, led by Mary Warnock, whose Committee of Enquiry reported in 1978. A year later Margaret Thatcher became Prime Minister and some of the Warnock recommendations came into force in the Education Act 1981.

The Warnock report introduced a very different way of understanding ‘handicapped’ children. They were no longer seen as being different, but as having special educational needs – as did up to 20% of the school population. Special educational needs were defined in terms of the support children needed, rather than in terms of their physical or mental impairments. What was envisaged was that many children in special schools would gradually migrate to mainstream, supported by Statements of Special Educational Need. And mainstream schools would gradually become more inclusive, adapting their buildings, equipment and teaching methods to meet an ever wider range of educational needs. The new system might have worked well if the rest of the education system hadn’t changed around it.

Context is crucial; one size doesn’t fit all

The Warnock recommendations were made in the context of a very flexible education system. In 1981 Local Education Authorities (LEAs), schools and teachers had a great deal of autonomy in what was taught, how it was taught and when. That all changed with the 1988 Education Reform Act that heralded a compulsory National Curriculum, SATs and Ofsted. Central government essentially wrested control of education from local bodies, something that had been actively opposed for the previous 100 years – few people wanted education to become a political football.

The new education system was at heart a one-size-fits-all affair. Governments find one-size-fits-all systems very appealing. They look as if they are going to be cheaper to run because professional training, equipment and resources can be standardised and performance can be easily measured. Unfortunately for governments, human populations are not one-size, but are very varied. A universal service won’t meet the needs of a whole population if it’s designed to meet only the needs of the average person. A stark choice faces those designing universal systems: either they can design a system that meets everybody’s needs and resource it properly, or they can design a system that doesn’t meet everybody’s needs and then spend years trying to sort out the ensuing muddle.

The 1880 education system was one-size-fits-all and the next century was spent sorting out the problems that resulted for handicapped children. There was a brief period after 1981 when the education system took a big step towards meeting the needs of all children, but seven years later it flipped back to one-size-fits-all. The last 30 years have been spent trying unsuccessfully to limit the damage for children with SEND.

So what’s the alternative? The answer isn’t further reform of the SEND system, because the causes of the problems don’t lie within the SEND system, but with the broader education system. Two key causes are teacher training and targets – the subjects of the next post.

*I’ve used the term ‘handicapped’ because it was in widespread use in the education system until the Warnock Committee changed the terminology.
§ © Harris Museum and Art Gallery http://www.mylearning.org/victorian-school-and-work-in-preston/images/1-3215/

We’re all different

We’re all different. Tiny variations in our DNA before and at conception. Our parents smoked/didn’t smoke, drank/didn’t drink, followed ideal/inadequate diets, enjoyed robust health or had food intolerances, allergies, viral infections. We were brought up in middle class suburbia/a tower block and attended inadequate/outstanding schools. All those factors contribute to who we are and what we are capable of achieving.

That variation, inherent in all biological organisms, is vital to our survival as a species. Without it, we couldn’t adapt to a changing environment or form communities that successfully protect us. The downside of that inherent variation is that some of us draw a short straw. Some variations mean we don’t make it through childhood, have lifelong health problems or die young. Or that we become what Katie Ashford, SENCO at Michaela Community School, Wembley, calls the ‘weakest pupils‘.

Although the factors that contribute to our development aren’t, strictly speaking, random, they are so many and so varied, they might as well be random. That means that in a large population, the measurement of any characteristic affected by many factors – height, blood pressure, intelligence, reading ability – will form what’s known as a normal distribution; the familiar bell curve.

The bell curve

If a particular characteristic forms a bell-shaped distribution, that allows us to make certain predictions about a large population. For that characteristic, 50% of the population will score above average and 50% below average; there will be relatively few people who are actually average. We’ll know that around 70% of the population will score fairly close to average, around 25% noticeably above or below it, and around 5% considerably higher or lower. That’s why medical reference ranges for various characteristics are based on the upper and lower measurements for 95% of the population; if your blood glucose levels or thyroid function is in the lowest or highest 2.5%, you’re likely to have a real problem, rather than a normal variation.

So in terms of general ability that means around 2.5% of the population will be in a position to decide whether they’d rather be an Olympic athlete, a brain surgeon or Prime Minister (or all three), whereas another 2.5% will find everyday life challenging.
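Those percentages are rounded versions of the standard normal distribution figures, which are easy to check (a quick sketch using scipy, assuming it’s installed):

```python
from scipy.stats import norm

# Proportion of a normal distribution within 1 standard deviation of the mean
within_1sd = norm.cdf(1) - norm.cdf(-1)      # ~0.683: "fairly close to average"
# Proportion within 2 standard deviations
within_2sd = norm.cdf(2) - norm.cdf(-2)      # ~0.954: basis of medical reference ranges
# Between 1 and 2 standard deviations: "noticeably above or below" average
between_1_and_2sd = within_2sd - within_1sd  # ~0.272
# Beyond 2 standard deviations (about 2.3% in each tail): "considerably higher or lower"
beyond_2sd = 1 - within_2sd                  # ~0.046

print(f"{within_1sd:.1%}, {between_1_and_2sd:.1%}, {beyond_2sd:.1%}")  # 68.3%, 27.2%, 4.6%
```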

What does a normal distribution mean for education? Educational attainment is affected by many causal factors, so by bizarre coincidence the attainment of 50% of school pupils is above average, and 50% below it. Around 20% of pupils have ‘special educational needs’ and around 2.5% will have educational needs that are significant enough to warrant a Statement of Special Educational Needs (recently replaced by Education Health and Care Plans).

Special educational needs

In 1978, the Warnock report pointed out that based on historical data, up to 20% of school pupils would probably have special educational needs at some point in their school career. ‘Special educational needs’ has a precise but relative meaning in law. It’s defined in terms of pupils requiring educational provision additional to or different from “educational facilities of a kind generally provided for children of the same age in schools within the area of the local education authority”.

Statements of SEN

The proportion of pupils with statements of SEN remained consistently at around 2.8% between 2005 and 2013 (after which the SEN system changed). http://www.publications.parliament.uk/pa/cm200506/cmselect/cmeduski/478/478i.pdf https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/225699/SFR30-2013_Text.pdf

It could be, of course, that these figures are an artifact of the system; anecdotal evidence suggests that some local authorities considered statutory assessments only for children who scored below the 2nd percentile on the WISC scale. Or it could be that measures of educational attainment do reflect the effectively random nature of the causes of educational attainment. In other words, a single measure of educational attainment can tell us whether a child’s attainment is unusually high or low; it can’t tell us why it’s unusually high or low. That often requires a bit of detective work.

If they can do it, anyone can

Some people feel uncomfortable with the idea of human populations having inherent variation; it smacks of determinism, excuses and complacency. So from time to time we read inspiring accounts of children in a school in a deprived inner city borough all reading fluently by the age of 6, or of the GCSE A*-C grades in a once failing school leaping from 30% to 60% in a year. The implication is that if they can do it, anyone can. That’s a false assumption. Those things can happen in some schools. But they can’t happen in all schools simultaneously because of the variation inherent in human populations and because of the nature of life events (see previous post).

Children of differing abilities don’t distribute themselves neatly across schools. Some schools might have no children with statements and others might have many. Even if all circumstances were equal (which they’re not) clustering occurs within random distributions. This is a well-known phenomenon in epidemiology; towns with high numbers of cancer patients or hospitals with high numbers of unexpected deaths where no causal factors are identified tend to attract the attention of conspiracy theorists. This clustering illusion isn’t so well known in educational circles. It’s all too easy to assume that a school has few children with special educational needs because of the high quality of teaching, or that a school has many children with SEN because teaching is poor. Obviously, it’s more complicated than that.
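A quick simulation illustrates the point. In this sketch (my own illustration, using the 2.8% statement rate cited above and a made-up school size), every pupil in every school faces identical odds, yet the counts still vary widely from school to school:

```python
import random

random.seed(2016)  # fixed seed so the example is reproducible

# 100 schools of 200 pupils each; every pupil independently has a
# 2.8% chance of having a statement of SEN (the rate cited above).
counts = [sum(random.random() < 0.028 for _ in range(200)) for _ in range(100)]

print(f"expected per school: {200 * 0.028:.1f}")        # 5.6
print(f"actual range: {min(counts)} to {max(counts)}")  # typically about 1 to 12
```

Identical odds, yet some schools end up with double the expected number of statemented pupils and others with hardly any – no difference in teaching quality required.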

What helps the weakest pupils?

According to Katie, what ‘the weakest pupils’ need is “more focus, more rigour and more practice if they are to stand any chance of catching up with their peers”. Katie goes on to unpack what she means. More focus means classrooms that aren’t chaotic. More rigour means expecting children to read challenging texts. More practice means practising the things they can’t do, not the things they can.

Katie’s post is based on the assumption that the weakest pupils can and should ‘catch up with their peers’. But it’s not clear what she means by that. Does she mean the school not needing a bottom set? All pupils attaining at least the national average for their age group? All pupils clustered at the high end of the attainment range? She doesn’t say.

In a twitter discussion, Katie agreed that there is variation inherent in a population, but

[screenshot: Katie Ashford on the bell curve]

I agree with Katie that there is often room for improvement, and that her focus, getting all children reading, can make a big difference, but improvement is likely to entail more than more focus, more rigour and more practice. In an earlier post Katie complains that “Too many people overcomplicate the role of SENCO”. She sees her role as very simple: “I avoid pointless meetings, unnecessary paperwork and attending timewasting conferences as much as possible. Instead, I teach, organise interventions, spend lots of time with the pupils, and make sure teachers and support staff have everything they need to teach their kids really, really well.”

Her approach sounds very sensible.  But she doesn’t say what the interventions are. Or what the teachers and support staff need to teach their kids really, really well. Or what meetings, paperwork and conferences she thinks are pointless, unnecessary and timewasting. Katie doesn’t say how many children at Michaela have statements of special needs or EHCPs – presumably some children have arrived there with these in place. Or what she does about the meetings and paperwork involved. Or how she tracks individual children’s progress. (I’m not suggesting that statements and EHCPs are the way to go – just that currently they’re part of the system and SENCOs have to deal with them).

What puzzled me most about Katie’s interventions was that they bore little resemblance to those I’ve seen other SENCOs implement in mainstream schools. It’s possible that they’ve overcomplicated their role. It could be that the SENCOs I’ve watched at work are in primary schools and that at secondary level it’s different. Another explanation is that they’ve identified the root causes of children’s learning difficulties and have addressed them.

They’ve introduced visual timetables, taught all pupils Makaton, brought in speech and language therapists to train staff, installed the same flooring throughout the building to improve the mobility of children with cerebral palsy or epilepsy and integrated music, movement and drama into the curriculum. They’ve developed assistive technology for children with sensory impairments and built up an extensive, accessible school library that includes easy-to-read books with content suitable for older kids for poor readers and more challenging texts with content suitable for younger kids for good readers. They’ve planted gardens and attended forest schools regularly to develop motor and sensory skills.

As a child who read avidly – including Dickens – I can see how many hours of reading and working through chapters of Dickens could improve the reading ability of many children. But I’m still struggling to see how that would work for a kid whose epilepsy results in frequent ‘absences’ of attention or who has weak control of eye movements, an auditory processing impairment or very limited working memory capacity.

I’m aware that ‘special educational needs’ is a contentious label and that it’s often only applied because children aren’t being taught well, or taught appropriately. I’m utterly committed to the idea of every child being given the best possible education. I just don’t see any evidence to support the idea that catching up with one’s peers is a measure of educational excellence, or that practicing what you’re not good at is a better use of time than doing what you are good at.

Section 7 of the Education Act 1996 (based on the 1944 Education Act) frames a suitable education in terms of an individual child’s age, ability and aptitude, and any special educational needs they may have. The education system appears to have recently lost sight of the aptitude element. I fear that an emphasis on ‘catching up’ with one’s peers and on addressing weaknesses rather than developing strengths will inevitably result in many children seeing themselves as failing to jump an arbitrary hurdle, rather than as individuals with unique sets of talents and aptitudes who can play a useful and fulfilling role in society.

I’d be interested in Katie’s comments.

phlogiston for beginners

Say “learning styles” to some teachers and you’re likely to get your head bitten off. Tom Bennett, the government’s behaviour tsar/guru/expert/advisor, really, really doesn’t like the idea of learning styles as he has made clear in a series of blogposts exploring the metaphor of the zombie.

I’ve come in for a bit of flak from various sources for suggesting that Bennett might have rather over-egged the learning styles pudding. I’ve been accused of not accepting the evidence, not admitting when I’m wrong, advancing neuromyths, being a learning styles advocate, being a closet learning styles advocate, and by implication not caring about the chiiiiiiiildren and being responsible for a metaphorical invasion by the undead. I reject all those accusations.

I’m still trying to figure out why learning styles have caused quite so much fuss. I understand that teachers might be a bit miffed about being told by schools to label children as visual, auditory or kinaesthetic (VAK) learners only to find there’s no evidence that they can be validly categorised in that way. But the time and money wasted on learning styles surely pales into insignificance next to the amounts squandered on the industry that’s sprung up around some questionable assessment methods, an SEN system that a Commons Select Committee pronounced not fit for purpose, or a teacher training system that for generations has failed to equip teachers with the skills they need to evaluate popular wheezes like VAK and brain gym.

And how many children have suffered actual harm as a result of being given a learning style label? I’m guessing very few compared to the number whose life has been blighted by failing the 11+, being labelled ‘educationally subnormal’, or more recent forms of failure to meet the often arbitrary requirements of the education system.  What is it about learning styles?

the learning styles neuromyth

I made the mistake of questioning some of the assumptions implicit in this article, notably that the concept of learning styles is a false belief, that it’s therefore a neuromyth and is somehow harmful in that it raises false hopes about transforming society.

My suggestion that the evidence for the learning styles concept is mixed rather than non-existent, that there are some issues around the idea of the neuromyth that need to be addressed, and that the VAK idea, even if wrong, probably isn’t the biggest hole in the education system’s bucket, was taken as a sign that my understanding of the scientific method must be flawed.

the evidence for aliens

One teacher (no names, no pack drill) said “This is like saying the ‘evidence for aliens is mixed’”. No it isn’t. There are so many planets in the universe it’s highly unlikely Earth is the only one supporting life-forms, but so far, we have next to no evidence of their existence. But a learning style isn’t a life-form, it’s a construct, a label for phenomena that researchers have observed, and a pretty woolly label at that. It could refer to a wide range of very different phenomena, some of which are really out there, some of which are experimental artifacts, and some of which might be figments of a researcher’s imagination. It’s pointless speculating about whether learning styles exist or not, because whether they exist or not depends on what you label as a ‘learning style’. Life-forms are a different kettle of fish; there’s some debate around what constitutes a life-form and what doesn’t, but it’s far more tightly specified than any learning style ever has been.

you haven’t read everything

I was then chided for pointing out that Tom Bennett said he hadn’t finished reading the Coffield Learning Styles Review when (obviously) I hadn’t read everything there was to read on the subject either. But I hadn’t complained that Tom hadn’t read everything; I was pointing out that, by his own admission in his book Teacher Proof, he’d stopped reading before he got to the bit in the Coffield review which discusses learning styles models found to have validity and reliability, so it’s not surprising he came to a conclusion that Coffield didn’t support.

my evidence weighs more than your evidence

Then, “I’ve seen the tiny, tiny evidence you cite to support LS. Dwarfed by oceans of ‘no evidence’. There’s more evidence for ET than LS”. That’s not how the evaluation of scientific evidence works. It isn’t a case of putting the ‘for’ evidence in one pan of the scales and the ‘against’ evidence in the other and the heaviest evidence wins. On that basis, the heliocentric theories of Copernicus and Kepler would have never seen the light of day.
 
how about homeopathy?

Finally, “How about homeopathy? Mixed evidence from studies.” The implication is that if I’m not dismissing learning styles because the evidence is mixed, then I can’t dismiss homeopathy. Again the analogy doesn’t hold. Research shows that there is an effect associated with homeopathic treatments – something happens in some cases. But the theory of homeopathy doesn’t make sense in the context of what we know about biology, chemistry and physics. This suggests that the problem lies in the explanation for the effect, not the effect itself. But the concept of learning styles doesn’t conflict with what we know about the way people learn. It’s quite possible that people do have stable traits when it comes to learning. Whether or not they do, and if so what those traits are, is another matter.

Concluding from complex and variable evidence that learning styles don’t exist, and that not dismissing them out of hand is akin to believing in aliens and homeopathy, looks to me suspiciously like saying  “Phlogiston? Pfft! All that stuff about iron filings increasing in weight when they combust is a load of hooey.”

traditional vs progressive: mathematics, logic and philosophy meet the real world

For thousands of years, human beings have been trying to figure out why the world they live in works in the way it does. But it’s only been in the last five hundred or so that a coherent picture of those explanations has begun to emerge. It’s as if people have long had many of the pieces of the jigsaw, but there was no picture on the box. Because a few crucial pieces were missing, it was impossible to put the puzzle together so that the whole thing made sense.

Some of the puzzle pieces that began to make sense to the ancient Greeks involved mathematics – notably geometry. They assumed that if the consistent principles of geometry could be reliably applied to the real world, then it was likely other mathematical principles and the principles underlying mathematics (logic) could too. So philosophers started to use logic to study the fundamental nature of things.

Unfortunately for the mathematicians, logicians and philosophers the real world didn’t always behave in ways that mathematics, logic and philosophy predicted. And that’s why we developed science as we know it today. Scientific theories are tested against observations. If the observations fit the theory we can take the theory to be true for the time being. As soon as observations don’t fit the theory, it’s back to the drawing board. As far as science is concerned we can never be 100% sure of anything, but obviously we can be pretty sure of some things, otherwise we wouldn’t be able to cure diseases, build aircraft that fly or land probes on Mars.

unknown unknowns

Mathematics, logic and philosophy provide useful tools for helping us make sense of the real world, but those tools have limitations. One of the limitations is that the real world contains unknowns. Not only that, but as Donald Rumsfeld famously pointed out, some unknowns are unknown – we don’t always know what we don’t know. You can work out the unknowns in a set of mathematical equations – but not if you don’t know how many unknowns there are.

Education theory is a case in point. It has, from what I’ve seen, always been a bit of a mess. That’s not surprising, given that education is a heavily derived field; it encompasses a wide range of disciplines from sociology and politics to linguistics and child development. Bringing together core concepts from all relevant disciplines to apply them to education is challenging. There’s a big risk of oversimplifying theory, particularly if you take mathematics, logic or philosophy as your starting point.

That’s because it’s tempting, if you are familiar with mathematics, logic or philosophy but don’t have much experience of messier sciences like genetics, geography or medicine, to assume that the real world will fit into the mathematical, logical or philosophical grand scheme of things. It won’t. It’s also tempting to take mathematics, logic or philosophy as your starting point for developing educational theory on the assumption that rational argument will cut a clear path through the real-world jungle. It won’t.

The underlying principles of mathematics, logic and philosophy are well-established, but once real-world unknowns get involved, those underlying principles, although still valid, can’t readily be applied if you don’t know what you’re applying them to – if you haven’t identified all the causes of low school attendance, say, or if you assume you’ve identified all the causes when you haven’t.

traditional vs progressive

Take, for example, the ongoing debate about the relative merits of traditional vs progressive education. Critics often point out that framing educational methods as either traditional or progressive is futile for several reasons. People have different views about which methods are traditional and which are progressive, teachers don’t usually stick to methods they think of as being one type or the other, and some methods could qualify as both traditional and progressive. In short, critics claim that the traditional/progressive dichotomy is a false one.

This criticism has been hotly contested, notably by self-styled proponents of traditional methods. In a recent post, Greg Ashman contended that Steve Watson, as an author of a study comparing ‘traditional or teacher-centred’ to ‘student-centred’ approaches to teaching mathematics, was inconsistent here in claiming that the traditional/progressive dichotomy was a false one.

Watson et al got dragged into the traditional/progressive debate because of the terminology they used in their study. First off, they used the terms ‘teacher-centred’ and ‘student-centred’. In their study, ‘teacher-centred’ and ‘student-centred’ approaches are defined quite clearly. In other words ‘teacher-centred’ and ‘student-centred’ are descriptive labels that, for the purposes of the study, are applied to two specific approaches to mathematics teaching. The researchers could have labelled the two types of approach anything they liked – ‘a & b’, ‘Laurel & Hardy’ or ‘bacon & eggs’- but giving them descriptive labels has obvious advantages for researcher and reader alike. It doesn’t follow that the researchers believe that all educational methods can legitimately be divided into two mutually exclusive categories either ‘teacher-centred’ or ‘student-centred’.

Their second slip-up was using the word ‘traditional’. It’s used three times in their paper, again descriptively, to refer to usual or common practice. And again, the use of ‘traditional’ as a descriptor doesn’t mean the authors subscribe to the idea of a traditional/progressive divide. It’s worth noting that they don’t use the word ‘progressive’ at all.

words are used in different ways

Essentially, the researchers use the terms ‘teacher-centred’, ‘student-centred’ and ‘traditional’ as convenient labels for particular educational approaches in a specific context. The approaches are so highly specified that other researchers would stand a good chance of accurately replicating the study if they chose to do so.

Proponents of the traditional/progressive dichotomy are using the terms in a different way – as labels for ideas. In this case, the ideas are broad, mutually exclusive categories to which all educational approaches, they assume, can be allocated; the approaches involved are loosely specified, if indeed they are specified at all.

Another dichotomy characterises the traditional/progressive divide: teacher-centred vs student-centred methods. In his post on the subject, Greg appears to make three assumptions about Watson et al’s use of the terms ‘teacher-centred’ and ‘student-centred’ to denote two specific types of educational method:

• because they use the same terms as the traditional/progressive dichotomy proponents, they must be using those terms in the same way as the traditional/progressive dichotomy proponents, therefore
• whatever they claim to the contrary, they evidently do subscribe to the traditional/progressive dichotomy, and
• if the researchers apply the terms to two distinct types of educational approach, all educational methods must fit into one of the two mutually exclusive categories.

Commenting on his post, Greg says “to prove that it is a false dichotomy then you would have to show that one can use child-centred or teacher-centred approaches at the same time or that there is a third alternative that is commonly used”.  I pointed out that whether child-centred and teacher-centred are mutually exclusive depends on what you mean by ‘at the same time’ (same moment? same lesson?) and suggested collaborative approaches as a third alternative. Greg obviously didn’t accept that but omitted to explain why.

Collaborative approaches to teaching and learning were used extensively at the primary school I attended in the 1960s, and I’ve found them very effective for educating my own children. Collaboration between teacher and student could be described as neither teacher-centred nor student-centred, or as both. By definition it isn’t either one or the other.

tired of talking about traditional/progressive?

Many teachers say they are tired of never-ending debates about traditional/progressive methods and of arguments about whether or not the traditional/progressive dichotomy is a false one. I can understand why; the debates often generate more heat than light whilst going round in the same well-worn circles. So why am I bothering to write about it?

The reason is that simple dichotomies have intuitive appeal and can be very persuasive to people who don’t have the time or energy to think about them in detail. It’s all too easy to frame our thinking in terms of left/right, black/white or traditional/progressive and to overlook the fact that the world doesn’t fit neatly into those simple categories and that the categories might not be mutually exclusive. Proponents of particular policies, worldviews or educational approaches can marshal a good deal of support by simplistic framing even if that completely overlooks the complex messiness of the real world and has significant negative outcomes for real people.

The effectiveness of education, in the English speaking world at least, has been undermined by the overuse for decades of the traditional/progressive dichotomy. When I was training as a teacher, if it wasn’t progressive (whatever that meant) it was bad; for some teachers now, if it isn’t traditional (whatever that means) it’s bad. What we all need is a range of educational methods that are effective in enabling students to learn. Whether those methods can be described as traditional or progressive is neither here nor there; trying to fit methods into those categories serves, as far as I can see, no useful purpose whatsoever for most of us.