traditional vs progressive: visualisation

In the previous post about the traditional vs progressive education debate, I suggested that visual representation of the arguments might make them clearer. Here, I attempt to do that, starting with the conceptual model that Martin Robinson sets out in a post on his blog, Trivium 21c. The diagram below represents my understanding of Martin's model, of course, and might be wrong. It appears to involve only two mutually exclusive pathways from values and beliefs to customs and practices.

[Slide 1: Martin Robinson's conceptual model]

I then mapped out my conceptual model. Here’s a first draft:

[Slide 1: my conceptual model, first draft]

And an explanation:

evidence

You could describe the importance of systems principles, errors and biases, a body of knowledge, human rights and a varied population as my ‘values and beliefs’. But they’re not values and beliefs that sprang fully-formed into my head, nor have they simply been handed down via cultural transmission. They’ve all emerged from a variety of sources over several decades, have been tried-and-tested, and have changed over time.

errors and biases

Everyone views the evidence for what’s optimal politically, socially and educationally through the lens of their own knowledge, understanding and experience. We now know quite a lot about the errors and biases that affect our interpretation of the evidence. Knowing about the errors and biases doesn’t eliminate them, but it can reduce their impact.

systems principles

We also know quite a lot about the features of systems (features of systems generally, not just specific ones). Applying systems principles is essential if an education system is to be effective.

body of knowledge

I agree with Martin that a body of knowledge handed down from the past is crucial to education, but I wouldn't frame it in terms of 'the best which has been thought and said', mainly because that definition raises the question of who decides what's 'best'. I'd frame it instead in terms of validity (what's been tried-and-tested) and reliability (what's generally agreed on by experts in relevant fields). It's important to note that reliability alone isn't enough – history is replete with examples of experts being collectively wrong. This is one reason why I'm sceptical about Hirsch's model of cultural literacy.

varied population

Education is universal in most countries, and as such has to take into account the characteristics of individuals in a large population. And large populations vary considerably. Perhaps 70% of children would cope with a one-size-fits-all, subject-centred education, but 15% would be bored, or might question what they were taught because they'd be running ahead of it, and a different 15% would struggle to keep up. I'm not making those claims because I'm an IQ bell-curve believer, but because that's how large populations work: it's a pattern that has emerged over time from universal education systems.
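For what it's worth, a 70/15/15 split is close to what the one-standard-deviation bands of any roughly bell-shaped attainment distribution would produce (about 68% in the middle, about 16% in each tail), whatever mechanism generates that shape. A quick illustrative check; the ±1 standard deviation cut-offs are my assumption, not a claim made in this post:

```python
import math

def normal_cdf(z):
    """Cumulative distribution function of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Fraction of a normal population within +/- 1 standard deviation of the mean,
# and the fraction in each tail beyond it.
middle = normal_cdf(1.0) - normal_cdf(-1.0)
each_tail = normal_cdf(-1.0)

print(f"middle band: {middle:.1%}, each tail: {each_tail:.1%}")
# -> middle band: 68.3%, each tail: 15.9%
```

Nothing here depends on the distribution being IQ; any broadly similar spread of attainment yields roughly the same three bands.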

human rights

I’ve called this core element of education ‘human rights’, but I’m thinking more ‘life, liberty and the pursuit of happiness’ and ‘liberté, égalité, fraternité’ than UNHRC.  General principles tend to be more flexible than statutory ones.

customs and practices

Applying these underlying principles would result in particular educational customs and practices based on teaching an interconnected body of knowledge rather than ‘subjects’, a curriculum that was adaptable rather than ‘personalised’, and a framework for personal and social development. I haven’t detailed the customs and practices because they would vary across and within schools, classes and groups of students.

questioning assumptions

My conceptual model of education is very different to Martin’s, and to other traditional/progressive models.   I would question some of the fundamental assumptions of the traditional/progressive dichotomy.

  1. There has to be one core educational belief or value. Why not two, three or six?
  2. ‘Traditional’ and ‘progressive’ are polar opposites. The core concepts are, but life is not just about core concepts. Some beliefs, customs and practices that have been handed down are invaluable; others aren’t. Some changes are mistakes; others change everyone’s life for the better.
  3. The traditional v progressive model assumes that the body of knowledge that’s been handed down is valid and reliable, when in fact some parts aren’t and need revising; that’s how knowledge works.
  4. The traditional v progressive model assumes that the only alternative to teaching ‘the best which has been thought or said’ is a ‘personalised curriculum’. It isn’t. The body of knowledge can be adapted to particular groups of students. That’s where professional expertise comes in.

why does any of this matter?

Some of the points of the traditional v progressive debate have been pretty obscure and not everyone recognises the divide, so I can understand why people might be asking why the debate matters.  It matters because a simple but wrong idea can be halfway round the world before a more complex but right idea has got its boots on. And simple but wrong ideas can have a devastating impact on many people, especially when they creep into public policy because politicians are in a tearing hurry to implement vote-winning wheezes.

At one time, a government-commissioned committee of enquiry might take years to examine research findings and evaluate opinions. The Warnock committee’s Enquiry into the education of handicapped children and young people, for example, was commissioned by a Conservative education secretary in 1973, reported to a Labour government five years later, and some of its recommendations were enacted under another Conservative government three years after that.

In contrast, Nick Gibb’s recent speech to the Centre for Independent Studies in Sydney relies heavily on anecdotal evidence and references to the opinions of particular contributors to social media. It’s a speech, not a committee of enquiry, but clearly there’s been a shift in the level of rigour.

traditional vs progressive: the meta-debate

The traditional vs progressive education debate has been a contentious one. Some have argued that there’s a clear divide between traditional and progressive education, and others that it’s a false dichotomy. So in addition to the traditional/progressive debate, there’s been a meta-debate about whether or not a traditional/progressive divide actually exists.

the meta-debate

Two features of the meta-debate have puzzled me. One is which educational practices those who recognise a traditional/progressive divide consider traditional, and which progressive. The other is why those who recognise a traditional/progressive divide feel so strongly about people who don’t.

Here for example is the usually mild-mannered Martin Robinson, on his blog Trivium 21C.

“So the next time someone argues that progress and tradition are a false dichotomy, think why would they argue this? They are either lying and are using this argument to hide the fact that they are either on one side or the other.”

My initial understanding of Martin’s argument was as follows:

  1. In general, the term traditional means ‘belief, custom or practice being handed down’, ‘from the past’, and ‘conservative in the sense of keeping things the same’. Progressive means ‘advocating reform in political or social matters’, ‘toward the future’ and ‘radical in the sense of reforming things’.
  2. In education, “traditionalists argue for the centrality of subject and progressives argue for the centrality of the child”.
  3. It’s not just Martin who defines the terms in this way; the general meanings are used widely, and the specific educational meanings are shared by John Dewey and Chambers etymological dictionary, no less.
  4. The beliefs, customs and practices referred to by the terms traditional and progressive are mutually exclusive; you can’t prioritise what’s handed down from the past and prioritise reform at the same time, and “the classroom can’t be both subject centred and child centred.” Therefore the categories traditional and progressive must be mutually exclusive.

I agreed with Martin on some points. Beliefs, customs and practices have indeed been handed down, and political and social reforms have been carried out. Subject-centred and child-centred education have certainly both happened. And there’s widespread agreement on what traditional and progressive mean, both generally and in education. However, at this point Martin appears to make some assumptions, and this is where we parted company.

assumptions

The first assumption is that because certain beliefs, customs and practices exist out there in the real world, the categories to which people assign those beliefs, customs and practices, must also exist out there in the real world; the categories have external validity.

The second assumption is that if there is widespread agreement on what the terms traditional and progressive refer to, the categories traditional and progressive have, for all intents and purposes, a universal meaning; the categories are also reliable.

The third assumption is that the beliefs, customs and practices assigned to the categories traditional and progressive are mutually exclusive, therefore the categories traditional and progressive must be mutually exclusive.

Those assumptions were the only reasons I could think of that would prompt Martin to accuse people of lying or covering up if they claimed that tradition vs progress was a false dichotomy.

I think the assumptions are unfounded, largely because, although there might be widespread agreement about what traditional and progressive refer to, that agreement isn’t universal. Other proponents of the traditional/progressive divide apply different criteria.

differences of opinion

Here’s Old Andrew’s definition from 2014: “Progressive teaching is that which rejects any of the pillars of traditional teaching. These are 1) the existence of a tradition i.e. a body of knowledge necessary for developing the intellect. 2) The use of direct instruction & practice as the most effective methods of teaching. 3) The authority of teachers in the classroom.”*

And here’s Robert Peal, in his book Progressively Worse:

“It has become fashionable to pose the ideas of progressive education against those of, for want of a better term, ‘traditional’ education. Educational commentators are likely to say that such ‘polarising rhetoric’ establishes ‘false dichotomies’. When in reality a sensible mix of the two approaches is required. This is true. …Such dichotomies, (skills/knowledge, child-centred/ teacher-led) are perhaps better thought of as sitting at opposite ends of a spectrum.” (p.8)

Each of the three commentators appears to believe that a traditional/progressive divide exists out there in the real world, but they have different ideas about where the divide lies, or whether there are several divides, or whether the divide is actually a spectrum. But despite those differences of opinion, each of the commentators cheerfully castigates anyone who questions the location or the existence of the divide.

Robert Peal says in a blogpost that those criticising the categorisation of issues in education are “more often than not just trying to shut down debate.”  Old Andrew has also alleged that those who think the divide is a false dichotomy are in denial about the existence of the debate.

I was perplexed. I just couldn’t see how a wide range of educational theories or practices could be shoe-horned into two mutually exclusive categories, but I wasn’t lying about that, or covering anything up, and I can hardly be accused of wanting to shut down debate.  Then a recent Twitter exchange shed more light on the subject.

trad:prog values

Although proponents of a traditional/progressive divide often refer to values, I’d had no idea that they were basing the divide primarily on values. Or for that matter, what values they might be basing it on.   Martin’s post now made more sense. If he defines traditional and progressive education in terms of single mutually exclusive core values that he believes exist out there in the real world, then I can see why he might feel justified in accusing people who disagree of lying or covering up.

who disagrees?

One problem for people who disagree with proponents of the traditional/progressive divide is that the proponents appear to assume their definition of traditional and progressive education is valid (which is questionable) and reliable (which it clearly isn’t if other proponents of the divide don’t agree about where the divide is).

A second problem is an assumption that the core values that characterise traditional and progressive education are mutually exclusive. I would question that as well. Clearly, education can’t be subject centred and child centred at the same time, but who decided a label can be attached to only one value? Or that education has to be centred on only one thing?

A third problem is that although proponents of the traditional/progressive divide might be arguing that the divide exists only at the level of values (and in Martin’s case might involve only two core values), each of the proponents I’ve cited has made numerous references to practice. This might explain why I, and others, have gone ‘Dichotomy? What dichotomy?’, or have claimed to be eclectic, or somewhere between the two, or whatever.

I’ve argued previously that it might be helpful to represent abstract concepts like traditional and progressive diagrammatically. I still think this would be a good move. A few Venn diagrams and a bit of graphical representation would force all of us to clarify exactly what we mean.

*I can’t locate the original tweets, but blogged about them here.

play or direct instruction in early years?

One of the challenges levelled at advocates of the importance of play for learning in the Early Years Foundation Stage (EYFS) has been the absence of solid evidence for its importance. Has anyone ever tested this theory? Where are the randomised controlled trials?

The assumption that play is an essential vehicle for learning is widespread and has for many years dominated approaches to teaching young children. But is it anything more than an assumption?  I can understand why critics have doubts.  After all, EY teachers tend to say “Of course play is important. Why would you question that?” rather than “Of course play is important (Smith & Jones, 1943; Blenkinsop & Tompkinson, 1972).”  I think there are two main reasons why EY teachers tend not to cite the research.

why don’t EY teachers cite the research?

First, the research about play is mainly from the child development literature rather than the educational literature. There’s a vast amount of it and it’s pretty robust, showing how children use play to learn how the world works: What does a ball do? How does water behave? What happens if…?  If children did not learn through play, much of the research would have been impossible.

Second, you can observe children learning through play. In front of your very eyes. A kid who can’t post all the bricks in the right holes at the beginning of a play session can do so at the end. A child who doesn’t know how to draw a cat when they sit down with the crayons can do so a few minutes later.

Play is so obviously the primary vehicle for learning used by young children, that a randomised controlled trial of the importance of play in learning would be about as ethical as one investigating the importance of food for growth, or the need to hear talk to develop speech.

what about play at school?

But critics have another question: Children can play at home – why waste time playing in school when they could use that time to learn something useful, like reading, writing or arithmetic? Advocates for learning through play often argue that a child has to be developmentally ‘ready’ before they can successfully engage in such tasks, and play facilitates that developmental ‘readiness’. By developmentally ‘ready’, they’re not necessarily referring to some hypothetical, questionable Piagetian ‘stages’, but to whether the child has developed the capability to carry out the educational tasks. You wouldn’t expect a six month-old to walk – their leg muscles and sense of balance wouldn’t be sufficiently well developed. Nor would you expect the average 18 month-old to read – they wouldn’t have the necessary language skills.

Critics might point out that a better use of time would be to teach the tasks directly. “These are the shapes you need to know about.” “This is how you draw a cat.” Why not ‘just tell them’ rather than spend all that time playing?

There are two main reasons why play is a good vehicle for learning at the Early Years stage. One is that young children are highly motivated to play. Play involves a great deal of trial-and-error, an essential mechanism for learning in many contexts. The variable reinforcement that happens during trial-and-error play is strongly motivating for mammals, and human beings are no exception.

The other reason is that during play, a great deal of incidental learning goes on. When posting bricks, children learn about manual dexterity as well as about colour, number, texture, materials, shapes and angles. Drawing involves learning about shape, colour, 2-D representation of 3-D objects and, again, manual dexterity. Approached as play, both activities can also expand a child’s vocabulary and enable them to learn how to co-operate, collaborate or compete with others. Play offers a high learning return for a small investment of time and resources.

why not ‘just tell them’?

But isn’t ‘just telling them’ a more efficient use of time?   Sue Cowley, a keen advocate of the importance of play in Early Years, recently tweeted a link to an article in Psychology Today by Peter Gray, a researcher at Boston College. It’s entitled “Early Academic Training Produces Long-Term Harm”.

This is a pretty dramatic claim, and for me it raised a red flag – or at least an amber one. I’ve read through several longitudinal studies about children’s long-term development and they all have one thing in common: they show that the impact of early experiences (good and bad) is often moderated by later life events. ‘Delinquents’ settle down and become respectable married men with families; children from exemplary middle class backgrounds get in with the wrong crowd in their teens and go off the rails; the improvements in academic achievement resulting from a language programme in kindergarten have all but disappeared by third grade. The findings set out in Gray’s review article didn’t square with the findings of other longitudinal studies. Also, review articles can sometimes skate over crucial methodological points that call a study’s conclusions into question.

what the data tell us

So I was somewhat sceptical about Dr Gray’s claims – until I read the references (at least, three of the references – I couldn’t access the second). The studies he cites compared outcomes from three types of pre-school programme: High/Scope, direct instruction (including the DISTAR programme), and a traditional nursery pre-school curriculum. Some of the findings weren’t directly related to long-term outcomes but caught my attention:

  • In first, second and third grades, school districts used retention in grade rather than special education services for children experiencing learning difficulties (Marcon).
  • Transition (in this case grade 3 to 4) was followed by a dip in children’s academic performance (Marcon).
  • Because of the time that had elapsed since the original interventions, there had been ample opportunity for methodological criticisms to be addressed and resolved (Schweinhart & Weikart).
  • Mothers’ educational level was a significant factor (as in other studies) (Schweinhart & Weikart).
  • Small numbers of teachers were involved, so individual teachers could have had a disproportionate influence (Schweinhart & Weikart).
  • The lack of cited evidence for Common Core State Standards (Carlsson-Paige et al.).

Essentially, the studies cited by Dr Gray found that educational approaches featuring a significant element of child-initiated learning result in better long-term outcomes overall (including high school graduation rates) than those featuring direct instruction. The reasons aren’t entirely clear. Peter Gray and some of the researchers suggested the home visits that were a feature of all the programmes might have played a significant role; if parents had bought-in to a programme’s ethos (likely if there were regular home visits from teachers), children expected to focus on academic achievement at school and at home might have fewer opportunities for early incidental learning about social interaction that could shape their behaviour in adulthood.

The research findings provided an unexpected answer to a question I have repeatedly asked of proponents of Engelmann’s DISTAR programme (featured in one of the studies) but to which I’ve never managed to get a clear answer: what were the long-term outcomes of the programme? Initially, children who had followed direct instruction programmes performed significantly better in academic tests than those who hadn’t, but the gains disappeared after a few years, and the long-term outcomes included more years in special education and, later, significantly more felony arrests and assaults with dangerous weapons.

This wasn’t what I was expecting. What I was expecting was the pattern that emerged from the Abecedarian study; that academic gains after early intervention peter out after a few years, but that there are marginal long-term benefits. Transient and marginal improvements are not to be sniffed at. ‘Falling behind’ early on at school can have a devastating impact on a child’s self-esteem, and only a couple of young people choosing college rather than teenage parenthood or petty crime can make a big difference to a neighbourhood.

The most likely reason for the tail-off in academic performance is that the programme was discontinued, but the overall worse outcomes for the direct instruction children than for those in the control group are counterintuitive.  Of course it doesn’t follow that direct instruction caused the worse outcomes. The results of the interventions are presented at the group level; it would be necessary to look at the pathways followed by individuals to identify the causes for them dropping out of high school or getting arrested.

conclusion

There’s no doubt that early direct instruction improves children’s academic performance in the short-term. That’s a desirable outcome, particularly for children who would otherwise ‘fall behind’. However, from these studies, direct instruction doesn’t appear to have the long-term impact sometimes claimed for it; that it will address the problem of ‘failing’ schools; that it will significantly reduce functional illiteracy; or that early intervention will eradicate the social problems that cause so much misery and perplex governments.  In fact, these studies suggest that direct instruction results in worse outcomes.  Hopefully, further research will tell us whether that is a valid finding, and if so why it happened.

I’ve just found a post by Greg Ashman drawing attention to a critique of the High/Scope studies.  Worth reading.  [edit 21/4/17]

References

Carlsson-Paige, N, McLaughlin, GB and Almon, JW. (2015).  “Reading Instruction in Kindergarten: Little to Gain and Much to Lose”.  Published online by the Alliance for Childhood. http://www.allianceforchildhood.org/sites/allianceforchildhood.org/files…

Gray, P. (2015). Early Academic Training Produces Long-Term Harm.  Psychology Today https://www.psychologytoday.com/blog/freedom-learn/201505/early-academic-training-produces-long-term-harm

Marcon, RA (2002). “Moving up the grades: Relationship between preschool model and later school success.” Early Childhood Research & Practice 4 (1). http://ecrp.uiuc.edu/v4n1/marcon.html.

Schweinhart, LJ and Weikart, DP (1997). “The High/Scope Pre- school Curriculum Comparison Study through age 23.” Early Childhood Research Quarterly, 12. pp. 117-143. https://pdfs.semanticscholar.org/c339/6f2981c0f60c9b33dfa18477b885c5697e1d.pdf


Sue Cowley is a robust advocate of the importance of play in learning https://suecowley.wordpress.com/2014/08/09/early-years-play-is/

behavioural optometry: pros and cons

MUSEC is Macquarie University’s Special Education Centre. Since 2005 it has been issuing one-page briefings on various topics relevant to special education; a brilliant idea, and very useful for busy teachers. One drawback of a one-page briefing is that if the topic is a complex one, there might only be space for a simple explanation and a couple of references. The briefings get round that problem, in part, by putting relevant references on a central website.

Behavioural optometry is based on the assumption that some behavioural issues (in the broadest sense) are due to problems with the way the eyes function. This could include anything from poor convergence (eyes don’t focus together) to variations in processing visual information in different coloured lights. The theory is a plausible one; visual dysfunction can cause considerable discomfort and can affect balance and co-ordination, for example.

Behavioural optometrists are sometimes consulted if children have problems with reading, because reading requires fine-grained visual (and auditory) discrimination, and even small variations in the development of the visual system can cause problems for young children. One of the reasons systematic synthetic phonics programmes are so effective in helping young children learn to decode text is because they train children in making fine-grained distinctions between graphemes (and between phonemes). But phonics programmes cannot address all visual (or auditory) processing anomalies, which is the point where behavioural optometrists often come in.

The MUSEC briefing on behavioural optometry (Issue 33) draws on two references: a 2011 report by the American Academy of Pediatrics (AAP), and a 2009 review paper by Brendan Barrett, a professor of visual development at Bradford University. Aspects of the briefing perplexed me. I felt it didn’t accurately reflect the conclusions of the two references, because it:

  • doesn’t discriminate between treatments
  • overlooks the expertise of behavioural optometrists
  • equates lack of evidence for efficacy with inefficacy
  • assumes that what is true for a large population must be true for individuals
  • gives misleading advice to readers.

Discrimination between treatments

In its second paragraph the briefing lists three types of treatment used by behavioural optometrists: lenses and prisms, coloured lenses or overlays, and vision therapy. But from paragraph four onwards, no distinction is made between treatments – they are all referred to as ‘behavioural optometry’ and evidence (for all behavioural optometry treatments, presumably) is said to be ‘singularly lacking’. Since lenses and prisms are used in what Barrett calls traditional optometry (p.5), this generalisation is self-evidently inaccurate. Nor does it reflect Barrett’s conclusions. Although he highlights the scarcity of evidence and lack of support for some treatments, he also refers to treatments developed by behavioural optometrists being adopted in mainstream practice, and to evidence that supports claims involving convergence insufficiency, yoked prisms, and vision rehabilitation after brain disease/injury.

Expertise of behavioural optometrists

The briefing also appears to overlook the fact that behavioural optometrists are actually optometrists – a protected title, in the UK at least. As such, they are qualified to make independent professional judgments about the treatment of their patients. As Barrett points out, some of the controversies over treatments involve complex theoretical and technical issues; behavioural optometry isn’t the equivalent of Brain Gym. But teachers are unlikely to know that if they only read the briefing and not the references.

Lack of evidence for efficacy

Both references cited by the MUSEC briefing are reviews commissioned by professional bodies. Clearly, the American Academy of Pediatrics, the College of Optometrists or MUSEC cannot endorse or advocate treatments for which there is little or no evidence of efficacy. But individual practitioners are not issuing policy statements, they are treating individual patients. If they are using treatments for which a robust evidence base is lacking, that’s unsatisfactory, but a weak evidence base doesn’t mean that there is no evidence for efficacy, nor that the treatments in question are ineffective. Setting up RCTs of treatments for complex issues like ‘learning difficulties’ is challenging, expensive and time-consuming. As a parent, I would far rather my child try treatments that had a weak evidence base but were recommended by experienced practitioners, than wait for the Cochrane reviewers to finish a task that could take decades.

Populations vs individuals

The briefing paper says that “there is clear consensus among reading scientists that visual perception difficulties are rarely critical in reading difficulties and that the problem is typically more to do with language, specifically phonological processing.”

Although this statement is right about the consensus and the role of phonological processing, one can’t assume that what’s true at a population level is true for every individual. Take, for example, convergence insufficiency (one of the areas where Barrett found evidence to support behavioural optometrists’ claims). According to the AAP report, the prevalence of convergence insufficiency is somewhere between 0.3% and 5% of the population (p.832). So the probability of any given child having convergence insufficiency is low, but in the UK it could still affect up to 500,000 children. Although the report found no evidence that convergence insufficiency causes problems with decoding, comprehension or school achievement, it points out that it ‘can interfere with the ability to concentrate on print for a prolonged period of time’. So even though in theory convergence insufficiency could be contributing to the difficulties of a quarter of the UK’s reluctant readers, it isn’t screened for in standard eye tests.
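The scale of that estimate follows directly from the AAP's prevalence range. A minimal sketch of the arithmetic; note that the UK child population below is my own illustrative round figure (about 10 million), chosen only because it reproduces the 'up to 500,000' estimate, and is not a number taken from the report:

```python
# Prevalence range for convergence insufficiency from the AAP report (p.832).
low_prevalence, high_prevalence = 0.003, 0.05   # 0.3% to 5%

# Assumed UK child population: an illustrative round figure, not from the report.
uk_children = 10_000_000

low_estimate = round(uk_children * low_prevalence)    # 30,000 children
high_estimate = round(uk_children * high_prevalence)  # 500,000 children

print(f"affected children: roughly {low_estimate:,} to {high_estimate:,}")
# -> affected children: roughly 30,000 to 500,000
```

Even at the bottom of the prevalence range, tens of thousands of children would be affected, which is the point: a low individual probability still amounts to a large absolute number in a national population.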

Advice to readers

The briefing recommends visual assessment for problems with acuity and refractive or ‘similar’ problems, but that’s not what the AAP recommends. It says:

“Children with suspected learning disabilities in whom a vision problem is suspected by the child, parents, physicians, or educators should be seen by an ophthalmologist who has experience with the assessment and treatment of children, because some of these children may also have a treatable visual problem that accompanies or contributes to their primary reading or learning dysfunction.” (p. 829)

In the UK, that would require considerable persistence on the part of the child, parent or educator, although physicians might have more success.

The briefing also suggests an alternative to behavioural optometry: ‘explicit instruction in the specific areas causing difficulty’. Quite how ‘explicit instruction’ would improve problems with eye tracking, visual processing speed, visual sequential memory, visual discrimination, visual motor integration, visual spatial skills and rapid naming (never mind attention, or dyspraxia, where the difficulty is often discovered precisely because the child is unable to carry out explicit instructions) is unclear.

Conclusion

I’m not claiming that behavioural optometry ‘does help children with reading difficulties’ because I don’t know whether it does or not. But that appears to be the nub of the problem – in the absence of evidence nobody knows whether it does or not. Nor which treatments help, if any. As the AAP paper says “Although it is prudent to be skeptical, especially with regard to prematurely disseminated therapies, it is important to also remain openminded.” (p.836)

I also had problems with the MUSEC briefing’s reading of Barrett’s conclusions. Although I wouldn’t go so far as to say the briefing is wrong (except perhaps about the lenses, and I’m not sure what it means by ‘explicit instruction’), its take-home message, for me, was that behavioural optometrists lack competence, that visual problems are unlikely to play any part in developmental abnormalities, and that if there are visual problems they will be limited to acuity and refractive or ‘similar’ factors. That’s not the message I got from either of the papers cited by the briefing. Obviously, on one side of A4, the authors couldn’t have covered all the relevant issues, but I felt that what they included and omitted could give the wrong impression to anyone unfamiliar with the issues.

References

American Academy of Pediatrics (2011). Joint technical report – Learning disabilities, dyslexia, and vision. Pediatrics, 127, e818-e856.

Barrett, B.T. (2009). A critical evaluation of the evidence supporting the practice of behavioural vision therapy. Ophthalmic and Physiological Optics, 29, 4-25.

going round in circles

Central to the Tiger Teachers’ model of cognitive science is the concept of cognitive load. Cognitive load refers to the amount of material that working memory is handling at any one time. It’s a concept introduced by John Sweller, a researcher frequently cited by the Tiger Teachers. Cognitive load is an important concept for education because human working memory capacity is very limited – we can think about only a handful of items at the same time. If students’ cognitive load is too high, they won’t be able to solve problems or will fail to learn some material.

I’ve had concerns about the Tiger Teachers’ interpretations of concepts from cognitive science, and about how they apply those concepts to their own learning, but until recently I hadn’t paid much attention to the way their students were being taught. I had little information about it for a start, and if it ‘worked’ for a particular group of teachers and students, I saw no reason to question it.

increasing cognitive load

The Michaela Community School recently blogged about solving problems involving circle theorems. Vince Ulam, a mathematician and maths teacher*, took issue with the diagrammatic representations of the problems.

The diagrams of the circles and triangles are clearly not accurate; they don’t claim to be. In an ensuing Twitter discussion, opinion was divided over whether or not the accuracy of diagrams mattered. Some people thought it didn’t matter if the diagrams were intended only as a representation of an algebraic or arithmetic problem. One teacher thought inaccurate diagrams would ensure the students didn’t measure angles or guess them.

The problem with the diagrams is not that they are imprecise – few people would quibble over a sketch diagram representing an angle of 28° that was actually 32°. It’s that they are so inaccurate as to be misleading. For example, there’s an obtuse angle that clearly isn’t obtuse, an angle of 71° is more acute than one of 28°, and a couple of isosceles triangles are scalene. As Vince points out, this makes it impossible for students to determine anything by inspection – an important feature of trigonometry. Diagrams with this level of inaccuracy also have implications for cognitive load, something that the Tiger Teachers are, rightly, keen to minimise.

My introduction to trigonometry at school was what the Tiger Teachers would probably describe as ‘traditional’. A sketch diagram illustrating a trigonometry problem was acceptable, but was expected to present a reasonably accurate representation of the problem. A diagram of an isosceles triangle might not be to scale, but it should be an isosceles triangle. An obtuse angle should be an obtuse angle, and an angle of 28° should not be larger than one of 71°.

Personally, I found some of the diagrams so inaccurate as to be quite disconcerting. After all those years of trigonometry, the shapes of isosceles triangles and obtuse angles, and the relative sizes of angles of ~30° or ~70°, are burned into my brain, as the Tiger Teachers would no doubt expect them to be. So seeing a scalene triangle masquerading as an isosceles, an acute angle claiming to be 99°, and angles of 28° and 71° trading places set up a somewhat unnerving Necker shift. In each case my brain started flipping between two contradictory representations: what the diagram was telling me and what the numbers were telling me.

It was the Stroop effect, but with lines and numbers rather than letters and colours; and the Stroop effect increases cognitive load. Even students accustomed to isosceles triangles not always looking like isosceles triangles would experience an increased cognitive load whilst looking at these diagrams, because they’d have to process two competing representations: what their brain is telling them about the diagram and what it’s telling them about the numbers. I had similar misgivings about the ‘CUDDLES’ approach used to teach French at Michaela.

CUDDLES and cognitive load

The ‘traditional’ approach to teaching foreign languages is to start with a bunch of simple nouns, adjectives and verbs, do a lot of rehearsal, and work up from there; that approach keeps cognitive load low from the get-go.   The Michaela approach seems to be to start with some complex language and break it down in a quasi-mathematical fashion involving underlining some letters, dotting others and telling stories about words.

Not only do students need to learn the words, what they represent and how French speakers use them, they have to learn a good deal of material extraneous to the language itself. I can see how the extraneous material acts as a belt-and-braces approach to ‘securing’ knowledge, but it must increase cognitive load because the students have to think about that as well as the language.

The Tiger Teachers’ approach to teaching is intriguing, but I still can’t figure out the underlying rationale; it certainly isn’t about reducing cognitive load. Why does the Tiger Teachers’ approach to teaching matter? Because now that Nick Gibb has signed up to it, it will probably become educational policy, regardless of the validity of the evidence.

Note:  I resisted the temptation to call this post ‘non angeli sed anguli’.

*Amended from ‘maths teacher’ –  Old Andrew correctly pointed out that this was an assumption on my part. Vince Ulam assures me my assumption was correct.  I guess he should know.

the debating society

One of my concerns about the model of knowledge promoted by the Tiger Teachers is that it hasn’t been subjected to sufficient scrutiny. A couple of days ago on Twitter I said as much. Jonathan Porter, a teacher at the Michaela Community School, thought my criticism unfair because the school has invited critique by publishing a book and hosting two debating days. Another teacher recommended watching the debate between Guy Claxton and Daisy Christodoulou, ‘Sir Ken is right: traditional education kills creativity’. She said it might not address my concerns about theory. She was right, it didn’t. But it did suggest a constructive way to extend the Tiger Teachers’ model of knowledge.

the debate

Guy, speaking for the motion and defending Sir Ken Robinson’s views, highlights the importance of schools developing students’ creativity, and answers the question ‘what is creativity?’ by referring to the findings of an OECD study; that creativity emerges from six factors – curiosity, determination, imagination, discipline, craftsmanship and collaboration. Daisy, opposing the motion, says that although she and Guy agree on the importance of creativity and its definition, they differ over the methods used in schools to develop it.

Daisy says Guy’s model involves students learning to be creative by practising being creative, which doesn’t make sense. It’s a valid point. Guy says knowledge is a necessary but not sufficient condition for developing creativity; other factors are involved. Another valid point. Both Daisy and Guy debate the motion but they approach it from very different perspectives, so they don’t actually rigorously test each other’s arguments.

Daisy’s model of creativity is a bottom-up one. Her starting point is how people form their knowledge and how that develops into creativity. Guy’s model, in contrast, is a top-down one; he points out that creativity isn’t a single thing, but emerges from several factors. In this post, I propose that Daisy and Guy are using the same model of creativity, but because Daisy’s focus is on one part and Guy’s on another, their arguments shoot straight past each other, and that in isolation, both perspectives are problematic.

Creativity is a complex construct, as Guy points out. A problem with his perspective is that the factors he found to be associated with creativity are themselves complex constructs. How does ‘curiosity’ manifest itself? Is it the same in everyone, or does it vary from person to person? Are there multiple component factors associated with curiosity too? Can we ask the same questions about ‘imagination’? Daisy, in contrast, claims a central role for knowledge and deliberate practice. A problem with Daisy’s perspective is, as I’ve pointed out elsewhere, that her model of knowledge peters out when it comes to the complex cognition Guy refers to. With a bit more information, Daisy and Guy could have done some joined-up thinking. To me, the two models look like the representation below, the grey words and arrows indicating concepts and connections referred to but not explained in detail.

slide1

cognition and expertise

If I’ve understood it correctly, Daisy’s model of creativity is essentially this: If knowledge is firmly embedded in long-term memory (LTM) via lots of deliberate practice and organised into schemas, it results in expertise. Experts can retrieve their knowledge from LTM instantly and can apply it flexibly. In short, creativity is a feature of expertise.

Daisy makes frequent references to research; what scientists think, half a century of research, what all the research has shown. She names names; Herb Simon, Anders Ericsson, Robert Bjork. She reports research showing that expert chess players, football players or musicians don’t practise whole games or entire musical works – they practise short sequences repeatedly until they’ve overlearned them. That’s what enables experts to be creative.

Daisy’s model of expertise is firmly rooted in an understanding of cognition that emerged from artificial intelligence (AI) research in the 1950s and 1960s. At the time, researchers were aware that human cognition was highly complex and often seemed illogical.  Computer science offered an opportunity to find out more; by manipulating the data and rules fed into a computer, researchers could test different models of cognition that might explain how experts thought.

It was no good researchers starting with the most complex illogical thinking – because it was complex and illogical. It made more sense to begin with some simpler examples, which is why the AI researchers chose chess, sport and music as domains to explore. Expertise in these domains looks pretty complex, but the complexity has obvious limits because chess, sport and music have clear, explicit rules. There are thousands of ways you can configure chess pieces or football players and a ball during a game, but you can’t configure them any-old-how because chess and football have rules. Similarly, a musician can play a piece of music in many different ways, but they can’t play it any-old-how because then it wouldn’t be the same piece of music.

In chess, sport and music, experts have almost complete knowledge, clear explicit rules, and comparatively low levels of uncertainty.   Expert geneticists, doctors, sociologists, politicians and historians, in contrast, often work with incomplete knowledge, many of the domain ‘rules’ are unknown, and uncertainty can be very high. In those circumstances, expertise  involves more than simply overlearning a great many facts and applying them flexibly.

Daisy is right that expertise and creativity emerge from deliberate practice of short sequences – for those who play chess, sport or music. Chess, soccer and Beethoven’s piano concerto No. 5 haven’t changed much since the current rules were agreed and are unlikely to change much in future. But domains like medicine, economics and history still periodically undergo seismic shifts in the way whole areas of the domains are structured, as new knowledge comes to light.

This is the point at which Daisy’s and Guy’s models of creativity could be joined up.  I’m not suggesting some woolly compromise between the two. What I am suggesting is that research that followed the early AI work offers the missing link.

I think the missing link is the schema.   Daisy mentions schemata (or schemas if you prefer) but only in terms of arranging historical events chronologically. Joe Kirby in Battle Hymn of the Tiger Teachers also recognises that there can be an underlying schema in the way students are taught.  But the Tiger Teachers don’t explore the idea of the schema in any detail.

schemas, schemata

A schema is the way people mentally organise their knowledge. Some schemata are standardised and widely used – such as the periodic table or multiplication tables. Others are shared by many people, but are a bit variable – such as the Linnaean taxonomy of living organisms or the right/left political divide. But because schemata are constructed from the knowledge and experience of the individual, some are quite idiosyncratic. Many teachers will be familiar with students all taught the same material in the same way, but developing rather different understandings of it.
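To make the idea concrete, here is an illustrative sketch of my own (the historical groupings are hypothetical examples, not taken from the post): the same items of knowledge can be configured under two different schemata, so the facts are identical but their organisation, and hence what they mean to their holder, differs.

```python
# Illustrative sketch: identical facts, two different schemata.
# The facts themselves don't change; only their organisation does.
facts = {
    1776: "American Revolution",
    1789: "French Revolution",
    1917: "Russian Revolution",
}

# Schema A: chronological - the kind of cumulative configuration
# Joe Kirby describes for teaching purposes.
chronological = [facts[year] for year in sorted(facts)]

# Schema B: thematic - a hypothetical interpretive grouping.
thematic = {
    "Atlantic revolutions": ["American Revolution", "French Revolution"],
    "20th-century revolutions": ["Russian Revolution"],
}

# Two people taught the same three facts could hold either configuration,
# and would retrieve and relate the facts differently as a result.
```

The point of the sketch is that a schema is a configuration of knowledge, not the knowledge itself; change the configuration and you change what the knowledge means.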

There’s been a fair amount of research into schemata. The schema was first proposed as a psychological concept by Jean Piaget*. Frederic Bartlett carried out a series of experiments in the 1930s demonstrating that people use schemata, and in the heyday of AI the concept was explored further by, for example, David Rumelhart, Marvin Minsky and Robert Axelrod. It later extended into script theory (Roger Schank and Robert Abelson), and how people form prototypes and categories (e.g. Eleanor Rosch, George Lakoff). The schema might be the missing link between Daisy’s and Guy’s models of creativity, but both models stop before they get there. Here’s how the cognitive science research allows them to be joined up.

Last week I finally got round to reading Jerry Fodor’s book The Modularity of Mind, published in 1983. By that time, cognitive scientists had built up a substantial body of evidence related to cognitive architecture. Although the evidence itself was generally robust, what it was saying about the architecture was ambiguous. It appeared to indicate that cognitive processes were modular, with specific modules processing specific types of information, e.g. visual or linguistic. It also indicated that some cognitive processes operated across the board, e.g. problem-solving or intelligence. The debate had tended to be rather polarised. What Fodor proposed was that cognition isn’t a case of either-or but of both-and: perceptual and linguistic processing is modular, but the higher-level, more complex cognition that draws on modular information is global. His prediction turned out to be pretty accurate, which is why Daisy’s and Guy’s models can be joined up.

Fodor was familiar enough with the evidence to know that he was very likely to be on the right track, but his model of cognition is a complex one, and he knew he could have been wrong about some bits of it. So he deliberately exposes his model to the criticism of cognitive scientists, philosophers and anyone else who cared to comment, because that’s how the scientific method works. A hypothesis is tested. People try to falsify it. If they can’t, then the hypothesis signposts a route worth exploring further. If they can, then researchers don’t need to waste any more time exploring a dead end.

joined-up thinking

Daisy’s model of creativity has emerged from a small sub-field of cognitive science – what AI researchers discovered about expertise in domains with clear, explicit rules. She doesn’t appear to see the need to explore schemata in detail because the schemata used in chess, sport and music are by definition highly codified and widely shared.  That’s why the AI researchers chose them.  The situation is different in the sciences, humanities and arts where schemata are of utmost importance, and differences between them can be the cause of significant conflict.  Guy’s model originates in a very different sub-field of cognitive science – the application of high-level cognitive processes to education. Schemata are a crucial component; although Guy doesn’t explore them in this debate, his previous work indicates he’s very familiar with the concept.

Since the 1950s, cognitive science has exploded into a vast research field, encompassing everything from the dyes used to stain brain tissue, through the statistical analysis of brain scans, to the errors and biases that affect judgement and decision-making by experts. Obviously it isn’t necessary to know everything about cognitive science before you can apply it to teaching, but if you’re proposing a particular model of cognition, having an overview of the field and inviting critique of the model would help avoid unnecessary errors and disagreements.  In this debate, I suggest schemata are noticeable by their absence.

*First use of schema as a psychological concept is widely attributed to Piaget, but I haven’t yet been able to find a reference.

The Tiger Teachers and cognitive science

Cognitive science is a key plank in the Tiger Teachers’ model of knowledge. If I’ve understood it properly the model looks something like this:

Cognitive science has discovered that working memory has limited capacity and duration, so pupils can’t process large amounts of novel information. If this information is secured in long-term memory via spaced, interleaved practice, students can recall it instantly whenever they need it, freeing up working memory for thinking.

What’s wrong with that? Nothing, as it stands. It’s what’s missing that’s the problem.
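The spaced-practice element of that model can be sketched as a toy scheduler. This is my own illustration of the general idea of expanding review intervals, not a method described in Battle Hymn; the function name and the doubling factor are my assumptions.

```python
# Toy illustration of spaced practice: each successive review of an item
# happens after a longer interval than the last (here, doubling each time).
def review_days(n_reviews, first_interval=1, factor=2):
    """Return day offsets for successive reviews with expanding intervals."""
    days, day, interval = [], 0, first_interval
    for _ in range(n_reviews):
        day += interval
        days.append(day)
        interval *= factor
    return days

print(review_days(4))  # [1, 3, 7, 15]
```

The expanding gaps are what distinguish spaced practice from massed practice; interleaving would additionally mix items from different topics within each review session.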

Subject knowledge

One of the Tiger Teachers’ beefs about the current education system is its emphasis on transferable skills. They point out that skills are not universally transferable, many are subject-specific, and in order to develop expertise in higher-level skills novices need a substantial amount of subject knowledge. Tiger Teachers’ pupils are expected to pay attention to experts (their teachers) and memorise a lot of facts before they can comprehend, apply, analyse, synthesise or evaluate. The model is broadly supported by cognitive science and the Tiger Teachers apply it rigorously to children. But not to themselves, it seems.

For most Tiger Teachers cognitive science will be an unfamiliar subject area. That makes them (like most of us) cognitive science novices. Obviously they don’t need to become experts in cognitive science to apply it to their educational practice, but they do need the key facts and concepts and a basic overview of the field. The overview is important because they need to know how the facts fit together and the limitations of how they can be applied. But with a few honourable exceptions (Daisy Christodoulou, David Didau and Greg Ashman spring to mind – apologies if I’ve missed anyone out), many Tiger Teachers don’t appear to have even thought about acquiring expertise, key facts and concepts, or an overview. As a consequence, facts are misunderstood or overlooked, principles from other knowledge domains are applied inappropriately, and erroneous assumptions are made about how science works. Here are some examples (page numbers refer to Battle Hymn of the Tiger Teachers):

It’s a fact…

“Teachers’ brains work exactly the same way as pupils’” (p.177). No they don’t. Cognitive science (ironically) thinks that children’s brains begin by forming trillions of connections (synapses). Then through to early adulthood, synapses that aren’t used get pruned, which makes information processing more efficient. (There’s a good summary here.)  Pupils’ brains are as different to teachers’ brains as children’s bodies are different to adults’ bodies. Similarities don’t mean they’re identical.

Then there’s working memory. “As the cognitive scientist Daniel Willingham explains, we learn by transferring knowledge from the short-term memory to the long term memory” (p.177). Well, kind of – if you assume that what Willingham explicitly describes as “just about the simplest model of the mind possible” is an exhaustive model of memory. If you think that, you might conclude, wrongly, that “the more knowledge we have in long-term memory, the more space we have in our working memory to process new information” (p.177). Or that “information cannot accumulate into long-term memory while working memory is being used” (p.36).

Long-term memory takes centre stage in the Tiger Teachers’ model of cognition. The only downside attributed to it is our tendency to forget things if we don’t revisit them (p.22). Other well-established characteristics of long-term memory – its unreliability, errors and biases – are simply overlooked, despite Daisy Christodoulou’s frequent citation of Daniel Kahneman whose work focused on those flaws.

With regard to transferable skills we’re told that “cognitive scientist Herb Simon and his colleagues have cast doubt on the idea that there are any general or transferable cognitive skills” (p.17), when what they actually cast doubt on are the ideas that all skills are transferable or that none are.

The Michaela cognitive model is distinctly reductionist; “all there is to intelligence is the simple accrual and tuning of many small units of knowledge that in total produce complex cognition” (p.19). Then there’s “skills are simply just a composite of sequential knowledge – all skills can be broken down to irreducible pieces of knowledge” (p.161).

The statement about intelligence is a direct quote from John Anderson’s paper ‘A Simple Theory of Complex Cognition’ but Anderson isn’t credited, so you might not know he was talking about simple encodings of objects and transformations, and that by ‘intelligence’ he means how ants behave rather than IQ. I’ve looked at Daisy Christodoulou’s interpretation of Anderson’s model here.

The idea that intelligence and skills consist ‘simply just’ of units of knowledge ignores Anderson’s procedural rules and marginalises the role of the schema – the way people configure their knowledge. Joe Kirby mentions “procedural and substantive schemata” (p. 17), but seems to see them only in terms of how units of knowledge are configured for teaching purposes; “subject content knowledge is best organised into the most memorable schemata … chronological, cumulative schemata help pupils remember subject knowledge in the long term” (p.21). The concept of schemata as the way individuals, groups or entire academic disciplines configure their knowledge, that the same knowledge can be configured in different ways resulting in different meanings, or that configurations sometimes turn out to be profoundly wrong, doesn’t appear to feature in the Tiger Teachers’ model.

Skills: to transfer or not to transfer?

Tiger Teachers see higher-level skills as subject-specific. That hasn’t stopped them applying higher-level skills from one domain inappropriately to another. In her critique of Bloom’s taxonomy, Daisy Christodoulou describes it as a ‘metaphor’ for the relationship between knowledge and skills. She refers to two other metaphors; ED Hirsch’s scrambled egg and Joe Kirby’s double helix (Seven Myths p.21).  Daisy, Joe and ED teach English, and metaphors are an important feature in English literature. Scientists do use metaphors, but they use analogies more often, because in the natural world patterns often repeat themselves at different levels of abstraction. Daisy, Joe and ED are right to complain about Bloom’s taxonomy being used to justify divorcing skills from knowledge. And the taxonomy itself might be wrong or misleading.   But it is a taxonomy and it is based on an important scientific concept – levels of abstraction – so should be critiqued as such, not as if it were a device used by a novelist.

Not all evidence is equal

A major challenge for novices is what criteria they can use to decide whether or not factual information is valid. They can’t use their overview of a subject area if they don’t have one. They can’t weigh up one set of facts against another if they don’t know enough facts. So Tiger Teachers who are cognitive science novices have to fall back on the criteria ED Hirsch uses to evaluate psychology – the reputation of researchers and consensus. Those might be key criteria in evaluating English literature, but they’re secondary issues for scientific research, and for good reason.

Novices then have to figure out how to evaluate the reputation of researchers and consensus. The Tiger Teachers struggle with reputation. Daniel Willingham and Paul Kirschner are cited more frequently than Herb Simon, but with all due respect to Willingham and Kirschner, they’re not quite in the same league. Other key figures don’t get a mention. When asked what was missing from the Tiger Teachers’ presentations at ResearchEd, I suggested, for starters, Baddeley and Hitch’s model of working memory. It’s been a dominant model for 40 years and has the rare distinction of being supported by later biological research. But it’s mentioned only in an endnote in Willingham’s Why Don’t Students Like School? and in Daisy’s Seven Myths about Education. I recommended inviting Alan Baddeley to speak at ResearchEd – he’s a leading authority on memory, after all. One of the teachers said he’d never even heard of him. So why was that teacher doing a presentation on memory at a national education conference?

The Tiger Teachers also struggle with consensus. Joe Kirby emphasises the length of time an idea has been around and the number of studies that support it (pp.22-3), overlooking the fact that some ideas can dominate a field for decades, be supported by hundreds of studies and then turn out to be profoundly wrong; theories about how brains work are a case in point.   Scientific theory doesn’t rely on the quantity of supporting evidence; it relies on an evaluation of all relevant evidence – supporting and contradictory – and takes into account the quality of that evidence as well.  That’s why you need a substantial body of knowledge before you can evaluate it.

The big picture

For me, Battle Hymn painted a clearer picture of the Michaela Community School than I’d been able to put together from blog posts and visitors’ descriptions. It persuaded me that Michaela’s approach to behaviour management is about being explicit and consistent, rather than simply being ‘strict’. I think having a week’s induction for new students and staff (‘bootcamp’) is a great idea. A systematic, rigorous approach to knowledge is vital and learning by rote can be jolly useful. But for me, those positives were all undermined by the Tiger Teachers’ approach to their own knowledge.  Omitting key issues in discussions of Rousseau’s ideas, professional qualifications or the special circumstances of schools in coastal and rural areas, is one thing. Pontificating about cognitive science and then ignoring what it says is quite another.

I can understand why Tiger Teachers want to share concepts like the limited capacity of working memory and skills not being divorced from knowledge.  Those concepts make sense of problems and have transformed their teaching.  But for many Tiger Teachers, their knowledge of cognitive science appears to be based on a handful of poorly understood factoids acquired second or third hand from other teachers who don’t have a good grasp of the field either. Most teachers aren’t going to know much about cognitive science; but that’s why most teachers don’t do presentations about it at national conferences or go into print to share their flimsy knowledge about it.  Failing to acquire a substantial body of knowledge about cognitive science makes its comprehension, application, analysis, synthesis and evaluation impossible.  The Tiger Teachers’ disregard for principles they claim are crucial is inconsistent, disingenuous, likely to lead to significant problems, and sets a really bad example for pupils. The Tiger Teachers need to re-write some of the lyrics of their Battle Hymn.

References

Birbalsingh, K (2016).  Battle Hymn of the Tiger Teachers: The Michaela Way.  John Catt Educational.

Christodoulou, D (2014).  Seven Myths about Education.  Routledge.