play or direct instruction in early years?

One of the challenges levelled at advocates of the importance of play for learning in the Early Years Foundation Stage (EYFS) has been the absence of solid evidence for its importance. Has anyone ever tested this theory? Where are the randomised controlled trials?

The assumption that play is an essential vehicle for learning is widespread and has for many years dominated approaches to teaching young children. But is it anything more than an assumption?  I can understand why critics have doubts.  After all, EY teachers tend to say “Of course play is important. Why would you question that?” rather than “Of course play is important (Smith & Jones, 1943; Blenkinsop & Tompkinson, 1972).”  I think there are two main reasons why EY teachers tend not to cite the research.

why don’t EY teachers cite the research?

First, the research about play is mainly from the child development literature rather than the educational literature. There’s a vast amount of it and it’s pretty robust, showing how children use play to learn how the world works: What does a ball do? How does water behave? What happens if…?  If children did not learn through play, much of the research would have been impossible.

Secondly, you can observe children learning through play. In front of your very eyes. A kid who can’t post all the bricks in the right holes at the beginning of a play session, can do so at the end. A child who doesn’t know how to draw a cat when they sit down with the crayons, can do so a few minutes later.

Play is so obviously the primary vehicle for learning used by young children, that a randomised controlled trial of the importance of play in learning would be about as ethical as one investigating the importance of food for growth, or the need to hear talk to develop speech.

what about play at school?

But critics have another question: children can play at home – why waste time playing in school when they could use that time to learn something useful, like reading, writing or arithmetic? Advocates for learning through play often argue that a child has to be developmentally ‘ready’ before they can successfully engage in such tasks, and that play facilitates that developmental ‘readiness’. By developmentally ‘ready’, they’re not necessarily referring to some hypothetical, questionable Piagetian ‘stages’, but to whether the child has developed the capability to carry out the educational tasks. You wouldn’t expect a six-month-old to walk – their leg muscles and sense of balance wouldn’t be sufficiently well developed. Nor would you expect the average 18-month-old to read – they wouldn’t have the necessary language skills.

Critics might point out that a better use of time would be to teach the tasks directly. “These are the shapes you need to know about.” “This is how you draw a cat.” Why not ‘just tell them’ rather than spend all that time playing?

There are two main reasons why play is a good vehicle for learning at the Early Years stage. One is that young children are highly motivated to play. Play involves a great deal of trial-and-error, an essential mechanism for learning in many contexts. The variable reinforcement that happens during trial-and-error play is strongly motivating for mammals, and human beings are no exception.

The other reason is that during play, a great deal of incidental learning goes on. When posting bricks, children develop manual dexterity and learn about colour, number, texture, materials, shapes and angles. Drawing involves learning about shape, colour, the 2-D representation of 3-D objects and, again, manual dexterity. Approached as play, both activities could also expand a child’s vocabulary and enable children to learn how to co-operate, collaborate or compete with others. Play offers a high learning return for a small investment of time and resources.

why not ‘just tell them’?

But isn’t ‘just telling them’ a more efficient use of time?   Sue Cowley, a keen advocate of the importance of play in Early Years, recently tweeted a link to an article in Psychology Today by Peter Gray, a researcher at Boston College. It’s entitled “Early Academic Training Produces Long-Term Harm”.

This is a pretty dramatic claim, and for me it raised a red flag – or at least an amber one. I’ve read through several longitudinal studies about children’s long-term development and they all have one thing in common: they show that the impact of early experiences (good and bad) is often moderated by later life events. ‘Delinquents’ settle down and become respectable married men with families; children from exemplary middle class backgrounds get in with the wrong crowd in their teens and go off the rails; the improvements in academic achievement resulting from a language programme in kindergarten have all but disappeared by third grade. The findings set out in Gray’s review article didn’t square with the findings of other longitudinal studies. Also, review articles can sometimes skate over crucial methodological points that call the cited studies’ conclusions into question.

what the data tell us

So I was somewhat sceptical about Dr Gray’s claims – until I read the references (at least, three of them – I couldn’t access the second). The studies he cites compared outcomes from three types of pre-school programme: High/Scope, direct instruction (including the DISTAR programme) and a traditional nursery pre-school curriculum. Some of the findings weren’t directly related to long-term outcomes but caught my attention:

  • In first, second and third grades, school districts used retention in grade rather than special education services for children experiencing learning difficulties (Marcon).
  • Transition (in this case grade 3 to 4) was followed by a dip in children’s academic performance (Marcon).
  • Because of the time that had elapsed since the original interventions, there had been ample opportunity for methodological criticisms to be addressed and resolved (Schweinhart & Weikart).
  • Mothers’ educational level was a significant factor (as in other studies) (Schweinhart & Weikart).
  • Small numbers of teachers were involved, so individual teachers could have had a disproportionate influence (Schweinhart & Weikart).
  • Evidence cited in support of the Common Core State Standards was lacking (Carlsson-Paige et al).

Essentially, the studies cited by Dr Gray found that educational approaches featuring a significant element of child-initiated learning result in better long-term outcomes overall (including high school graduation rates) than those featuring direct instruction. The reasons aren’t entirely clear. Peter Gray and some of the researchers suggested that the home visits featured in all the programmes might have played a significant role: if parents had bought in to a programme’s ethos (likely if there were regular home visits from teachers), children expected to focus on academic achievement at school and at home might have had fewer opportunities for early incidental learning about social interaction – learning that could shape their behaviour in adulthood.

The research findings provided an unexpected answer to a question I have repeatedly asked of proponents of Engelmann’s DISTAR programme (featured in one of the studies) but to which I’ve never managed to get a clear answer: what were the long-term outcomes of the programme? Initially, children who had followed direct instruction programmes performed significantly better in academic tests than those who hadn’t, but the gains disappeared after a few years, and the long-term outcomes included more years in special education and, later, significantly more felony arrests and assaults with dangerous weapons.

This wasn’t what I was expecting. What I was expecting was the pattern that emerged from the Abecedarian study: academic gains after early intervention peter out after a few years, but marginal long-term benefits remain. Transient and marginal improvements are not to be sniffed at. ‘Falling behind’ early on at school can have a devastating impact on a child’s self-esteem, and only a couple of young people choosing college rather than teenage parenthood or petty crime can make a big difference to a neighbourhood.

The most likely reason for the tail-off in academic performance is that the programme was discontinued, but the overall worse outcomes for the direct instruction children than for those in the control group are counterintuitive.  Of course it doesn’t follow that direct instruction caused the worse outcomes. The results of the interventions are presented at the group level; it would be necessary to look at the pathways followed by individuals to identify the causes for them dropping out of high school or getting arrested.

conclusion

There’s no doubt that early direct instruction improves children’s academic performance in the short term. That’s a desirable outcome, particularly for children who would otherwise ‘fall behind’. However, from these studies, direct instruction doesn’t appear to have the long-term impact sometimes claimed for it: that it will address the problem of ‘failing’ schools; that it will significantly reduce functional illiteracy; or that early intervention will eradicate the social problems that cause so much misery and perplex governments. In fact, these studies suggest that direct instruction results in worse outcomes. Hopefully, further research will tell us whether that is a valid finding and, if so, why it happened.

I’ve just found a post by Greg Ashman drawing attention to a critique of the High/Scope studies.  Worth reading.  [edit 21/4/17]

References

Carlsson-Paige, N., McLaughlin, G.B. and Almon, J.W. (2015). “Reading Instruction in Kindergarten: Little to Gain and Much to Lose”. Published online by the Alliance for Childhood. http://www.allianceforchildhood.org/sites/allianceforchildhood.org/files…

Gray, P. (2015). “Early Academic Training Produces Long-Term Harm”. Psychology Today. https://www.psychologytoday.com/blog/freedom-learn/201505/early-academic-training-produces-long-term-harm

Marcon, R.A. (2002). “Moving up the grades: Relationship between preschool model and later school success”. Early Childhood Research & Practice, 4 (1). http://ecrp.uiuc.edu/v4n1/marcon.html

Schweinhart, L.J. and Weikart, D.P. (1997). “The High/Scope Preschool Curriculum Comparison Study through age 23”. Early Childhood Research Quarterly, 12, pp. 117-143. https://pdfs.semanticscholar.org/c339/6f2981c0f60c9b33dfa18477b885c5697e1d.pdf


is systematic synthetic phonics generating neuromyths?

A recent Twitter discussion about systematic synthetic phonics (SSP) was sparked by a note to parents of children in a reception class, advising them what to do if their children got stuck on a word when reading. The first suggestion was “encourage them to sound out unfamiliar words in units of sound (e.g. ch/sh/ai/ea) and to try to blend them”. If that failed “can they use the pictures for any clues?” Two other strategies followed. The ensuing discussion began by questioning the wisdom of using pictures for clues and then went off at many tangents – not uncommon in conversations about SSP.

SSP proponents are, rightly, keen on evidence. The body of evidence supporting SSP is convincing but it’s not the easiest to locate; much of the research predates the internet by decades or is behind a paywall. References are often to books, magazine articles or anecdote: not to be discounted, but not what usually passes for research. As a consequence, it’s quite a challenge to build up an overview of the evidence for SSP that’s free of speculation, misunderstandings and theory that’s been superseded. The tangents that came up in this particular discussion are, I suggest, the result of assuming that if something is true for SSP in particular, it must also be true for reading, perception, development or biology in general. Here are some of the inferences that came up in the discussion.

You can’t guess a word from a picture
Children’s books are renowned for their illustrations. Good illustrations can support or extend the information in the text, showing readers what a chalet, a mountain stream or a pine tree looks like, for example. Author and artist usually have detailed discussions about illustrations to ensure that the book forms an integrated whole and is not just a text with embellishments.

If the child is learning to read, pictures can serve to focus attention (which could be wandering anywhere) on the content of the text and can have a weak priming effect, increasing the likelihood of the child accessing relevant words. If the picture shows someone climbing a mountain path in the snow, the text is unlikely to contain words about sun, sand and ice-creams.

I understand why SSP proponents object to the child being instructed to guess a particular word by looking at a picture; the guess is likely to be wrong and the child distracted from decoding the word. But some teachers don’t seem to be keen on illustrations per se. As one teacher put it, illustrations are “often superficial time consuming detract from learning”.

Cues are clues are guesswork
The note to parents referred to ‘clues’ in the pictures. One contributor cited a blogpost that claimed “with ‘mixed methods’ eyes jump around looking for cues to guess from”. Clues and cues are often used interchangeably in discussions about phonics on social media. That’s understandable; the words have similar meanings and a slip on the keyboard can transform one into the other. But in a discussion about reading methods, the distinction between guessing, clues and cues is an important one.

Guessing involves drawing conclusions in the absence of enough information to give you a good chance of being right; it’s haphazard, speculative. A clue is a piece of information that points you in a particular direction. A cue has a more specific meaning depending on context; e.g. theatrical cues, social cues, sensory cues. In reading research, a cue is a piece of information about something the observer is interested in or a property of a thing to be attended to. It could be the beginning sound or end letter of a word, or an image representing the word. Cues are directly related to the matter in hand, clues are more indirectly related, guessing is a stab in the dark.

The distinction is important because if teachers are using the terms cue and clue interchangeably and assuming they both involve guessing there’s a risk they’ll mistakenly dismiss references to ‘cues’ in reading research as guessing or clues, which they are not.

Reading isn’t natural
Another distinction that came up in the discussion was the idea of natural vs. non-natural behaviours. One argument for children needing to be actively taught to read rather than picking it up as they go along is that reading, unlike walking and talking, isn’t a ‘natural’ skill. The argument goes that reading is a relatively recent technological development so we couldn’t possibly have evolved mechanisms for reading in the same way as we have evolved mechanisms for walking and talking. One proponent of this idea is Diane McGuinness, an influential figure in the world of synthetic phonics.

The argument rests on three assumptions. The first is that we have evolved specific mechanisms for walking and talking but not for reading. The ideas that evolution has an aim or purpose, and that if everybody does something we must have evolved a dedicated mechanism for doing it, are strongly contested by those who argue instead that we can do whatever our anatomy and physiology enable us to do (see the arguments over Chomsky’s linguistic theory). But you wouldn’t know about that long-standing controversy from reading McGuinness’s books or the comments of SSP proponents.

The second assumption is that children learn to walk and talk without much effort or input from others. One teacher called the natural/non-natural distinction “pretty damn obvious”. But sometimes the pretty damn obvious isn’t quite so obvious when you look at what’s actually going on. By the time they start school, the average child will have rehearsed walking and talking for thousands of hours. And most toddlers experience a considerable input from others when developing their walking and talking skills even if they don’t have what one contributor referred to as a “WEIRDo Western mother”. Children who’ve experienced extreme neglect (such as those raised in the notorious Romanian orphanages) tend to show significant developmental delays.

The third assumption is that learning to use technological developments requires direct instruction. Whether it does or not depends on the complexity of the task. Pointy sticks and heavy stones are technologies used in foraging and hunting, but most small children can figure out for themselves how to use them – as do chimps and crows. Is the use of sticks and stones by crows, chimps or hunter-gatherers natural or non-natural? A bicycle is a man-made technology more complex than sticks and stones, but most people are able to figure out how to ride a bike simply by watching others do it, even if a bit of practice is needed before they can do it themselves. Is learning to ride a bike with a bit of support from your mum or dad natural or non-natural?

Reading English is a more complex task than riding a bike because of the number of letter-sound correspondences. You’d need a fair amount of watching and listening to written language being read aloud to be able to read for yourself. And you’d need considerable instruction and practice before being able to fly a fighter jet because the technology is massively more complex than that involved in bicycles and alphabetic scripts.

One teacher asked “are you really going to go for the continuum fallacy here?” I’ve no idea why he considers a continuum a fallacy. In the natural/non-natural distinction used by SSP proponents, there are three continua involved:

• the complexity of the task
• the length of rehearsal time required to master the task, and
• the extent of input from others that’s required.

Some children learn to read simply by being read to, reading for themselves and asking for help with words they don’t recognise. But because reading is a complex task, for most children learning to read by immersion like that would take thousands of hours of rehearsal. It makes far more sense to cut to the chase and use explicit instruction. In principle, learning to fly a fighter jet would be possible through trial-and-error, but it would be a stupidly costly approach to training pilots.

Technology is non-biological
I was told by several teachers that reading, riding a bike and flying an aircraft aren’t biological functions. I fail to see how they can’t be, since all involve human beings using their brains and bodies. It then occurred to me that these teachers were equating ‘biological’ with ‘natural’, or with the human body alone. In other words, if you acquire a skill that involves only body parts (e.g. walking or talking) it’s biological; if it involves anything other than a body part, it’s not. I’m not sure where that leaves hunting with wooden spears, making baskets or weaving woollen fabric on a wooden loom and shuttle.

Teaching and learning are interchangeable
Another tangent was whether or not learning is involved in sleeping, eating and drinking. I contended that it is; newborns do not sleep, eat or drink in the same way as most of them will be sleeping, eating or drinking nine months later. One teacher kept telling me they don’t need to be taught to do those things. I can see why teachers often conflate teaching and learning, but they are not two sides of the same coin. You can teach children things but they might fail to learn them. And children can learn things that nobody has taught them. It’s debatable whether or not parents shaping a baby’s sleeping routine, spoon feeding them or giving them a sippy cup instead of a bottle count as teaching, but it’s pretty clear there’s a lot of learning going on.

What’s true for most is true for all
I was also told by one teacher that all babies crawl (an assertion he later modified) and by a school governor that they can all suckle (an assertion that wasn’t modified). Sweeping generalisations like these from people working in education are worrying. Children vary. They vary a lot. Even if only 0.1% of children do or don’t do something, that would involve around 8,000 children in English schools (0.1% of roughly eight million pupils). Some and most are not all or none, and teachers of all people should be aware of that.

A core factor in children learning to read is the complexity of the task. If the task is a complex one, like reading, most children are likely to learn more quickly and effectively if you teach them explicitly. You can’t infer from that that all children are the same, that they all learn in the same way, or that teaching and learning are two sides of the same coin. Nor can you infer from a tenuous argument used to justify the use of SSP that distinctions between natural and non-natural or biological and technological are clear, obvious, valid or helpful. The evidence that supports SSP is the evidence that supports SSP. It doesn’t provide a general theory for language, education or human development.

seven myths about education: finally…

When I first heard about Daisy Christodoulou’s myth-busting book, in which she adopts an evidence-based approach to education theory, I assumed that she and I would see things pretty much the same way. It was only when I read reviews (including Daisy’s own summary) that I realised we’d come to rather different conclusions from what looked like the same starting point in cognitive psychology. I’ve been asked several times why – given that I have reservations about the current educational orthodoxy, think knowledge is important, don’t have a problem with teachers explaining things and support the use of systematic synthetic phonics – I’m critical of those calling for educational reform rather than of those responsible for a system that needs reforming. The reason involves the deep structure of the models, rather than their surface features.

concepts from cognitive psychology

Central to Daisy’s argument is the concept of the limited capacity of working memory. It’s certainly a core concept in cognitive psychology. It explains not only why we can think about only a few things at once, but also why we oversimplify and misunderstand, are irrational, are subject to errors and biases and use quick-and-dirty rules of thumb in our thinking. And it explains why an emphasis on understanding at the expense of factual information is likely to result in students not knowing much and, ironically, not understanding much either.

But what students are supposed to learn is only one of the streams of information that working memory deals with; it simultaneously processes information about students’ internal and external environment. And the limited capacity of working memory is only one of many things that impact on learning; a complex array of environmental factors is also involved. So although you can conceptually isolate the material students are supposed to learn and the limited capacity of working memory, in the classroom neither of them can be isolated from all the other factors involved. And you have to take those other factors into account in order to build a coherent, workable theory of learning.

But Daisy doesn’t introduce only the concept of working memory. She also talks about chunking, schemata and expertise. Daisy implies (although she doesn’t say so explicitly) that schemata are to facts what chunking is to low-level data: that just as students automatically chunk low-level data they encounter repeatedly, so they will automatically form schemata for facts they memorise, and the schemata will reduce cognitive load in the same way that chunking does (p. 20). That’s a possibility, because the brain appears to use the same underlying mechanism to represent associations between all types of information – but it’s unlikely. We know that schemata vary considerably between individuals, whereas people chunk information in very similar ways. That’s not surprising: the information being chunked is simple and highly consistent, whereas schemata often involve complex, inconsistent information.

Experimental work involving priming suggests that schemata increase the speed and reliability of access to associated ideas and that would reduce cognitive load, but students would need to have the schemata that experts use explained to them in order to avoid forming schemata of their own that were insufficient or misleading. Daisy doesn’t go into detail about deep structure or schemata, which I think is an oversight, because the schemata students use to organise facts are crucial to their understanding of how the facts relate to each other.

migrating models

Daisy and teachers taking a similar perspective frequently refer approvingly to ‘traditional’ approaches to education. It’s been difficult to figure out exactly what they mean. Daisy focuses on direct instruction and memorising facts, Old Andrew’s definition is a bit broader and Robert Peal’s appears to include cultural artefacts like smart uniforms and school songs. What they appear to have in common is a concept of education derived from the behaviourist model of learning that dominated psychology in the inter-war years. In education it focused on what was being learned; there was little consideration of the broader context involving the purpose of education, power structures, socioeconomic factors, the causes of learning difficulties etc.

Daisy and other would-be reformers appear to be trying to update the behaviourist model of education with concepts that, ironically, emerged from cognitive psychology not long after it switched focus from a behaviourist model of learning to a computational one: the point at which the field was first described as ‘cognitive’. The concepts the educational reformers focus on fit the behaviourist model well because they are strongly mechanistic and largely context-free. The examples that crop up frequently in the psychology research Daisy cites usually involve maths, physics and chess problems. These types of problems were chosen deliberately by artificial intelligence researchers because they were relatively simple and clearly bounded; the idea was that once the basic mechanism of learning had been figured out, the principles could then be extended to more complex, less well-defined problems.

Researchers later learned a good deal about complex, less well-defined problems, but Daisy doesn’t refer to that research. Nor do any of the other proponents of educational reform. What more recent research has shown is that complex, less well-defined knowledge is organised by the brain in a different way to simple, consistent information. So in cognitive psychology the computational model of cognition has been complemented by a constructivist one, but it’s a different constructivist model to the social constructivism that underpins current education theory. The computational model never quite made it across to education, but early constructivist ideas did – in the form of Piaget’s work. At that point, education theory appears to have grown legs and wandered off in a different direction to cognitive psychology. I agree with Daisy that education theorists need to pay attention to findings from cognitive psychology, but they need to pay attention to what’s been discovered in the last half century not just to the computational research that superseded behaviourism.

why criticise the reformers?

So why am I critical of the reformers, but not of the educational orthodoxy? When my children started school, they, and I, were sometimes perplexed by the approaches to learning they encountered. Conversations with teachers painted a picture of educational theory that consisted of a hotch-potch of valid concepts, recent tradition, consequences of policy decisions and ideas that appeared to have come from nowhere like Brain Gym and Learning Styles. The only unifying feature I could find was a social constructivist approach and even on that opinions seemed to vary. It was difficult to tell what the educational orthodoxy was, or even if there was one at all. It’s difficult to critique a model that might not be a model. So I perked up when I heard about teachers challenging the orthodoxy using the findings from scientific research and calling for an evidence-based approach to education.

My optimism was short-lived. Although the teachers talked about evidence from cognitive psychology and randomised controlled trials, the model of learning they were proposing appeared as patchy, incomplete and incoherent as the model they were criticising – it was just different. So here are my main reservations about the educational reformers’ ideas:

1. If mainstream education theorists aren’t aware of working memory, chunking, schemata and expertise, that suggests there’s a bigger problem than just their ignorance of these particular concepts. It suggests that they might not be paying enough attention to developments in some or all of the knowledge domains their own theory relies on. Knowing about working memory, chunking, schemata and expertise isn’t going to resolve that problem.

2. If teachers don’t know about working memory, chunking, schemata and expertise, that suggests there’s a bigger problem than just their ignorance of these particular concepts. It suggests that teacher training isn’t providing teachers with the knowledge they need. To some extent this would be an outcome of weaknesses in educational theory, but I get the impression that trainee teachers aren’t expected or encouraged to challenge what they’re taught. Several teachers who’ve recently discovered cognitive psychology have appeared rather miffed that they hadn’t been told about it. They were all Teach First graduates; I don’t know if that’s significant.

3. A handful of concepts from cognitive psychology doesn’t constitute a robust enough foundation for developing a pedagogical approach or designing a curriculum. Daisy essentially reiterates what Daniel Willingham has to say about the breadth and depth of the curriculum in Why Don’t Students Like School? He’s a cognitive psychologist and well-placed to show how models of cognition could inform education theory. But his book isn’t about the deep structure of theory; it’s about applying some principles from cognitive psychology in the classroom in response to specific questions from teachers. He explores ideas about pedagogy and the curriculum, but that’s as far as it goes. Trying to develop a model of pedagogy and design a curriculum based on a handful of principles presented in a format like this is like trying to devise courses of treatment and design a health service based on the information gleaned from a GP’s problem page in a popular magazine. But I might be being too charitable; Willingham is a trustee of the Core Knowledge Foundation, after all.

4. Limited knowledge. Rightly, the reforming teachers expect students to acquire extensive factual knowledge and emphasise the differences between experts and novices. But Daisy’s knowledge of cognitive psychology appears to be limited to a handful of principles discovered over thirty years ago. She, Robert Peal and Toby Young all quote Daniel Willingham on research in cognitive psychology during the last thirty years, but none of them, Willingham included, tell us what it is. If they did, it would show that the principles they refer to don’t scale up when it comes to complex knowledge. Nor do most of the teachers writing about educational reform appear to have much teaching experience. That doesn’t mean they are wrong, but it does call into question the extent of their expertise relating to education.

Some of those supporting Daisy’s view have told me they are aware that they don’t know much about cognitive psychology, but have argued that they have to start somewhere and it’s important that teachers are made aware of concepts like the limits of working memory. That’s fine if that’s all they are doing, but it’s not. Redesigning pedagogy and the curriculum on the basis of a handful of facts makes sense if you think that what’s important is facts and that the brain will automatically organise those facts into a coherent schema. The problem is of course that that rarely happens in the absence of an overview of all the relevant facts and how they fit together. Cognitive psychology, like all other knowledge domains, has incomplete knowledge but it’s not incomplete in the same way as the reforming teachers’ knowledge. This is classic Sorcerer’s Apprentice territory; a little knowledge, misapplied, can do a lot of damage.

5. Evaluating evidence Then there’s the way evidence is handled. Evidence-based knowledge domains have different ways of evaluating evidence, but they all evaluate it. That means weighing up the pros and cons, comparing evidence for and against competing hypotheses and so on. Evaluating evidence does not mean presenting only the evidence that supports whatever view you want to get across. That might be a way of making your case more persuasive, but is of no use to anyone who wants to know about the reliability of your hypothesis or your evidence. There might be a lot of evidence telling you your hypothesis is right – but a lot more telling you it’s wrong. But Daisy, Robert Peal and Toby Young all present supporting evidence only. They make no attempt to test the hypotheses they’re proposing or the evidence cited, and much of the evidence is from secondary sources – with all due respect to Daniel Willingham, just because he says something doesn’t mean that’s all there is to say on the matter.

cargo-cult science

I suggested to a couple of the teachers who supported Daisy’s model that, ironically, it resembled Feynman’s famous cargo-cult analogy (p. 97). They pointed out that the islanders were using replicas of equipment, whereas the concepts from cognitive psychology were the real deal. I’d suggest that even if the Americans had left their equipment on the airfield and the islanders had known how to use it, that wouldn’t have resulted in planes bringing in cargo – because there were other factors involved.

My initial response to reading Seven Myths about Education was one of frustration that despite making some good points about the educational orthodoxy and cognitive psychology, Daisy appeared to have got hold of the wrong ends of several sticks. This rapidly changed to concern that a handful of misunderstood concepts is being used as ‘evidence’ to support changes in national education policy.

In Michael Gove’s recent speech at the Education Reform Summit, he refers to the “solidly grounded research into how children actually learn of leading academics such as ED Hirsch or Daniel T Willingham”. Daniel Willingham has published peer-reviewed work, mainly on procedural learning, but I could find none by ED Hirsch. It would be interesting to know what the previous Secretary of State for Education’s criteria for ‘solidly grounded research’ and ‘leading academic’ were. To me the educational reform movement doesn’t look like an evidence-based discipline but bears all the hallmarks of an ideological system looking for evidence that affirms its core beliefs. This is no way to develop public policy. Government should know better.

the MUSEC briefings and Direct Instruction

Yesterday, I got involved in a discussion on Twitter about Direct Instruction (DI). The discussion was largely about what I had or hadn’t said about DI. Twitter isn’t the best medium for discussing anything remotely complex, but there’s something about DI that brings out the pedant in people, me included.

The discussion, if you can call it that, was triggered by a tweet about the most recent MUSEC briefing. The briefings, from Macquarie University Special Education Centre, are a great idea. A one-page round-up of the evidence relating to a particular mode of teaching or treatment used in special education is exactly the sort of resource I’d use often. So why the discussion about this one?

the MUSEC briefings

I’ve bumped into the briefings before. I read one a couple of years ago on the recommendation of a synthetic phonics advocate. It was briefing no.18, Explicit instruction for students with special learning needs. At the time, I wasn’t aware that ‘explicit instruction’ had any particular significance in education – other than denoting instruction that was explicit. And that could involve anything from a teacher walking round the room checking that students understood what they were doing, to ‘talk and chalk’, reading a book or computer-aided learning. The briefing left me feeling bemused. It was packed with implicit assumptions, and the references, presented online presumably for reasons of space, included one self-citation, a report that reached a different conclusion to the briefing, a 400-page book by John Hattie that doesn’t appear to reach the same conclusion either, and a paper by Kirschner, Sweller and Clark that doesn’t mention children with special educational needs. The references form a useful reading list for teachers, but hardly constitute robust evidence supporting the briefing’s conclusions.

My curiosity piqued, I took a look at another briefing, no.33 on behavioural optometry. I chose it because the SP advocates I’d encountered tended to be sceptical about visual impairments being a causal factor in reading difficulties, and I wondered what evidence they were relying on. I knew a bit about visual problems because of my son’s experiences. The briefing repeatedly lumped together things that should have been kept distinct and came to different conclusions to the evidence it cites. I think I was probably unlucky with these first two because some of the other briefings look fine. So what about the one on Direct Instruction, briefing no.39?

Direct Instruction and Project Follow Through

Direct Instruction (capitalised) is a scripted learning programme, now commercially available, developed by Siegfried Engelmann and Wesley Becker in the US in the 1960s, that performed outstandingly well in Project Follow Through (PFT).

The DI programme involved the scripted teaching of reading, arithmetic, and language to children between kindergarten and third grade. The PFT evaluation of DI showed significant gains in basic skills (word knowledge, spelling, language and math computation); in cognitive-conceptual skills (reading comprehension, math concepts, math problem solving); and in affect measures (co-operation, self-esteem, intellectual achievement, responsibility). A high school follow-up study by the sponsors of the DI programme showed that it was associated with positive long-term outcomes.

The Twitter discussion revolved around what I meant by ‘basic’ and ‘skills’. To clarify, as I understand it the DI programme itself involved teaching basic skills (reading, arithmetic, language) to quite young children (K-3). The evaluation assessed basic skills, cognitive-conceptual skills and affect measures. There is no indication in the evidence I’ve been able to access of how sophisticated the cognitive-conceptual skills or affect measures were. One would expect them to be typical of children in the K-3 age range. And we don’t know how long those outcomes persisted. The only evidence for long-term positive outcomes is from a study by the programme sponsors – not to be discounted, but not reliable enough to form the basis for a pedagogical method.

In other words, the PFT evaluation tells us that there were several robust positive outcomes from the DI programme. What it doesn’t tell us is whether the DI approach has the same robust outcomes if applied to other areas of the curriculum and/or with older children. Because the results of the evaluation are aggregated, it doesn’t tell us whether the DI programme benefitted all children or only some, or if it had any negative effects, or what the outcomes were for children with specific special educational needs or learning difficulties – the focus of MUSEC. Nor does it tell us anything about the use of direct instruction in general – what the briefing describes as a “generic overarching concept, with DI as a more specific exemplar”.
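The point about aggregated results can be illustrated with a toy calculation. The numbers below are hypothetical (mine, not from the PFT evaluation); they simply show how a positive overall gain can conceal a subgroup that lost ground.

```python
import statistics

# Hypothetical gain scores for two subgroups of children.
group_a_gains = [8, 9, 10, 9, 8]     # most children improve substantially
group_b_gains = [-2, -1, 0, -1, -2]  # a smaller subgroup loses ground

# The aggregated mean is comfortably positive, yet half the picture is hidden.
all_gains = group_a_gains + group_b_gains
print(statistics.mean(all_gains))        # 3.8 – looks like a clear success
print(statistics.mean(group_b_gains))    # -1.2 – the subgroup that didn't benefit
```

An evaluation that reports only the first number tells us nothing about the children represented by the second.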

the evidence

The briefing refers to “a large body of research evidence stretching back over four decades testifying to the efficacy of explicit/direct instruction methods including the specific DI programs.” So what is the evidence?

The briefing itself refers only to the PFT evaluation of the DI programme. The references, available online, consist of:

• a summary of findings written by the authors of the DI programme, Becker & Engelmann,
• a book about DI – the first two authors were Engelmann’s students and worked on the original DI programme,
• an excerpt from the same book on a commercial site called education.com,
• an editorial from a journal called Effective School Practices, previously known as Direct Instruction News and published by the National Institute for Direct Instruction (Chairman S Engelmann),
• a paper about the different ways in which direct instruction is understood, published by the Center on Innovation and Improvement which is administered by the Academic Development Institute, one of whose partners is Little Planet Learning,
• the 400-page book referenced by briefing 18,
• the peer-reviewed paper also referenced by briefing 18.

The references, which I think most people would construe as evidence, include only one peer-reviewed paper. It cites research findings supporting the use of direct instruction in relation to particular types of material, but doesn’t mention children with special needs or learning difficulties. Another reference is a synthesis of peer-reviewed studies. All the other references involve organisations with a commercial interest in educational methods – not the sort of evidence I’d expect to see in a briefing published by a university.

My recommendation for the MUSEC briefings? Approach with caution.

the new traditionalists: there’s more to d.i. than meets the eye, too

A few years ago, mystified by the way my son’s school was tackling his reading difficulties, I joined the TES forum and discovered I’d missed The Reading Wars. Well, not quite. They began before I started school and show no sign of ending any time soon. But I’d been blissfully unaware that they’d been raging around me.

On one side in the Reading Wars are advocates of a ‘whole language’ approach to learning to read – focusing on reading strategies and meaning – and on the other are advocates of teaching reading using phonics. Phonics advocates see their approach as evidence-based, and frequently refer to the whole language approach (using ‘mixed methods’) as based on ideology.

mixed methods

Most members of my family learned to read successfully using mixed methods. I was trained to teach reading using mixed methods and all the children I taught learned to read. My son, taught using synthetic phonics, struggled with reading and eventually figured it out for himself using whole word recognition. Hence my initial scepticism about SP. I’ve since changed my mind, having discovered that my son’s SP programme wasn’t properly implemented and after learning more about how the process of reading works. If I’d relied only on the scientific evidence cited as supporting SP, I wouldn’t have been convinced. Although it clearly supports SP as an approach to decoding, the impact on literacy in general isn’t so clear-cut.

ideology

I’ve also found it difficult to pin down the ideology purported to be at the root of whole language approaches. An ideology is a set of abstract ideas or values based on beliefs rather than on evidence, but the reasons given for the use of mixed methods when I was learning to read and when I was being trained to teach reading were pragmatic ones. In both instances, mixed methods were advocated explicitly because (analytic) phonics alone hadn’t been effective for some children, and children had been observed to use several different strategies during reading acquisition.

The nearest I’ve got to identifying an ideology are the ideas that language frames and informs people’s worldviews and that social and economic power plays a significant part in determining who teaches what to whom. The implication is that teachers, schools, school boards, local authorities or government don’t have a right to impose on children the way they construct their knowledge. To me, the whole language position looks more like a theoretical framework than an ideology, even if the theory is debatable.

the Teaching Wars

The Reading Wars appear to be but a series of battles in a much bigger war over what’s often referred to as traditional vs progressive teaching methods. The new traditionalists frequently characterise the Teaching Wars along the same lines as SP proponents characterise the Reading Wars; claiming that traditional methods are supported by scientific evidence, but ideology is the driving force behind progressive methods. Even a cursory examination of this claim suggests it’s a caricature of the situation rather than an accurate summary.

The progressives’ ideology
Rousseau is often cited as the originator of progressive education and indeed, progressive methods sometimes resemble the approach he advocated. However, many key figures in progressive education such as Herbert Spencer, John Dewey and Jean Piaget derived their methods from what was then state-of-the-art scientific theory and empirical observation, not from 18th century Romanticism.

The traditionalists’ scientific evidence The evidence cited by the new traditionalists appears to consist of a handful of findings from cognitive psychology and information science. They’re important findings, they should form part of teacher training and they might have transformed the practice of some teachers, but teaching and learning involve more than cognition. Children’s developing brains and bodies, their emotional and social background, the social, economic and political factors shaping the expectations on teachers and students in schools, and the philosophical frameworks of everybody involved suggest that evidence from many other scientific fields should also be informing educational theory, and that it might be risky to apply a few findings out of context.

I can understand the new traditionalists’ frustration. One has to ask why education theory hasn’t kept up to date with research in many fields that are directly relevant to teaching, learning, child development and the structure of the education system itself. However, dissatisfaction with progressive methods appears to originate, not so much with the methods themselves, as with the content of the curriculum and with progressive methods being taken to extremes.

keeping it simple

The limited capacity of working memory is the feature of human cognitive architecture that underpins Kirschner, Sweller and Clark’s argument in favour of direct instruction. One outcome of that limitation is a human tendency to oversimplify information by focusing on the prototypical features of phenomena – a tendency that often leads to inaccurate stereotyping. Kirschner, Sweller and Clark present their hypothesis in terms of a dispute between two ‘sides’, one advocating minimal guidance and the other a full explanation of concepts, procedures and strategies (p.75).

Although it’s appropriate in experimental work to use extreme examples of these approaches in order to test a hypothesis, the authors themselves point out that in a classroom setting most teachers using progressive methods provide students with considerable guidance anyway (p.79). Their conclusion that the most effective way to teach novices is through “direct, strong, instructional guidance” might be valid, but in respect of the oversimplified way they frame the dispute, they appear to have fallen victim to the very limitations of human cognitive architecture to which they draw our attention.

The presentation of the Teaching Wars in this polarised manner goes some way to explaining why direct instruction seems like such a big deal for the new traditionalists. Direct instruction shouldn’t be confused with Direct Instruction (capitalised) – the scripted teaching used in Engelmann & Becker’s DISTAR programme – although a recent BBC Radio 4 programme suggests that might be exactly what’s happening in some quarters.

direct instruction

The Radio 4 programme How do children learn history? is presented by Adam Smith, a senior lecturer in history at University College London, who has blogged about the programme here. He’s carefully non-committal about the methods he describes – it is the BBC after all.

A frequent complaint about the way the current national curriculum approaches history is what’s included, what’s excluded, what’s emphasised and what’s not. At home, we’ve had to do some work on timelines because although both my children have been required to put themselves into the shoes of various characters throughout history (an exercise my son has grown to loathe), neither of them knew how the Ancient Egyptians, Greeks, Romans, Vikings or Victorians related to each other – a pretty basic historical concept. But those are curriculum issues, rather than methods issues. As well as providing a background to the history curriculum debate, the broadcast featured two lessons that used different pedagogical approaches.

During an ‘inquiry’ lesson on Vikings, presented as a good example of current practice, groups of children were asked to gather information about different aspects of Viking life. A ‘direct instruction’ lesson on Greek religious beliefs, by contrast, involved the teacher reading from a textbook whilst the children followed the text in their own books with their finger, then discussed the text and answered comprehension questions on it. The highlight of the lesson appeared to be the inclusion of an exclamation mark in the text.

It’s possible that the way the programme was edited oversimplified the lesson on Greek religious beliefs, or that the children in the Viking lesson were older than those in the Greek lesson and better able to cope with ‘inquiry’, but there are clearly some possible pitfalls awaiting those who learn by relying on the content of a single textbook. The first is that whoever publishes the textbook controls the knowledge – that’s a powerful position to be in. The second is that you don’t need much training to be able to read from a textbook or lead a discussion about what’s in it – that has implications for who is going to be teaching our children. The third is how children will learn to question what they’re told. I’m not trying to undermine discipline in the classroom, just pointing out that textbooks can be, and sometimes are, wrong. The sooner children learn that authority lies in evidence rather than in authority figures, the better. Lastly, as a primary school pupil I would have found following a teacher reading from a textbook tedious in the extreme. As a secondary school pupil it was a teacher reading from a textbook for twenty minutes that clinched my decision to drop history as soon as possible. I don’t think I’d be alone in that.

who are the new traditionalists?

The Greek religions lesson was part of a project funded by the Education Endowment Foundation (EEF), a charity developed by the Sutton Trust and the Impetus Trust in 2011 with a grant from the DfE. The EEF’s remit is to fund research into interventions aimed at improving the attainment of pupils receiving free school meals. The intervention featured in How do children learn history? is being implemented in Future Academies in central London. I think the project might be the one outlined here, although this one is evaluating the use of Hirsch’s Core Knowledge framework in literacy, rather than in history, which might explain the focus on extracting meaning from the text.

My first impression of the traditionalists was that they were a group of teachers disillusioned by the ineffectiveness of the pedagogical methods they were trained to use, who’d stumbled across some principles of cognitive science they’d found invaluable and were understandably keen to publicise them. Several of the teachers are Teach First graduates and work in academies or free schools – not surprising if they want freedom to innovate. They also want to see pedagogical methods rigorously evaluated, and the most effective ones implemented in schools. But those teachers aren’t the only parties involved.

Religious groups have welcomed the opportunities to open faith schools and develop their own curricula – a venture supported by previous and current governments despite past complications resulting from significant numbers of schools in England being run by churches and the current investigation into the alleged ‘Trojan Horse’ operation in Birmingham.

Future, the sponsors of Future Academies and the Curriculum Centre, was founded by John and Caroline Nash, a former private equity specialist and stockbroker respectively. Both are reported to have made significant donations to the Conservative party. John Nash was appointed Parliamentary Under Secretary of State for Schools in January 2013. The Nashes are co-chairs of the board of governors of Pimlico Academy and Caroline Nash is chair of The Curriculum Centre. All four trustees of the Future group are from the finance industry.

Many well-established independent schools, notably residential schools for children with special educational needs and disabilities, are now controlled by finance companies. This isn’t modern philanthropy in action; the profits made from selling on the school chains, the magnitude of the fees charged to local authorities, and the fact that the schools are described as an ‘investment’ all suggest that another motivation is at work.

A number of textbook publishers got some free product placement in a recent speech by Elizabeth Truss, currently Parliamentary Under Secretary of State for Education and Childcare.

Educational reform might have teachers in the vanguard, but there appear to be some powerful bodies with religious, political and financial interests who might want to ensure they benefit from the outcomes, and have a say in what those outcomes are. The new traditionalist teachers might indeed be on to something with their focus on direct instruction, but if direct instruction boils down in practice to teachers using scripted texts or reading from textbooks, they will find plenty of other players willing to jump on the bandwagon and cash in on this simplistic and risky approach to educating the country’s most vulnerable children. Oversimplification can lead to unwanted complications.

direct instruction: the evidence

A discussion on Twitter raised a lot of questions about working memory and the evidence supporting direct instruction cited by Kirschner, Sweller and Clark. I couldn’t answer in 140 characters, so here’s my response. I hope it covers all the questions.

Kirschner, Sweller & Clark’s thesis is:

• working memory capacity is limited
• constructivist, discovery, problem-based, experiential, and inquiry-based teaching (minimal guidance) all overload working memory and
• evidence from studies investigating the efficacy of different methods supports the superiority of direct instruction.

Therefore, “In so far as there is any evidence from controlled studies, it almost uniformly supports direct, strong instructional guidance rather than constructivist-based minimal guidance during the instruction of novice to intermediate learners.” (p.83)

Sounds pretty unambiguous – but it isn’t.

1. Working memory (WM) isn’t simple. It includes several ‘dissociable’ sensory buffers and a central executive that monitors, attends to and responds to sensory information, information from the body and information from long term memory (LTM) (Wagner, Bunge & Badre, 2004; Damasio, 2006).

2. Studies comparing minimal guidance with direct instruction are based on ‘pure’ methods. Sweller’s work on cognitive load theory (CLT) (Sweller, 1988) was based on problems involving the use of a single buffer/loop, e.g. mazes, algebra. New items coming into the buffer displace older items, so buffer capacity would be the limiting factor. But real-world problems tend to involve different buffers, so items in the buffers can be easily maintained while they are manipulated by the central executive. For example, I can’t write something complex and listen to Radio 4 at the same time because my phonological loop can’t cope. But I can write and listen to music, or listen to Radio 4 whilst I cook a new recipe, because I’m using different buffers. Discovery, problem-based, experiential, and inquiry-based teaching in classrooms tends to resemble real-world situations more closely than the single-buffer problems Sweller used to demonstrate the concept of cognitive load, so the impact of the buffer limit would be lessened.
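The displacement effect described above can be sketched as a toy model (my own illustration, not code from any of the papers cited): a single phonological-loop-style store behaves like a fixed-capacity queue in which each new item pushes out the oldest one.

```python
from collections import deque

def run_buffer(items, capacity=4):
    """Toy model of a single limited-capacity store: present items one at a
    time and return what survives. A deque with maxlen silently discards
    the oldest item whenever a new one arrives at full capacity."""
    buffer = deque(maxlen=capacity)
    for item in items:
        buffer.append(item)  # at capacity, this displaces the oldest item
    return list(buffer)

# Present seven digits to a four-item buffer: the first three are lost.
print(run_buffer([1, 2, 3, 4, 5, 6, 7]))  # [4, 5, 6, 7]
```

The capacity figure is arbitrary; the point is only that a single store is limited by displacement, whereas tasks spread across separate buffers aren’t competing for the same slots.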

3. For example, Klahr & Nigam (2004) point out that because there’s no clear definition of discovery learning, in their experiment involving a scientific concept they ‘magnified the difference between the two instructional treatments’ – i.e. they used an ‘extreme type’ of both methods – a contrast that’s unlikely to occur in any classroom. Essentially they disproved the hypothesis that children always learn better by discovering things for themselves; but children are unlikely to ‘discover things for themselves’ in circumstances like those in the Klahr & Nigam study.

It’s worth noting that 8 of the children in their study figured out what to do at the outset, so were excluded from the results. And 23% of the direct instruction children didn’t master the concept well enough to transfer it.

That finding – that some learners failed to learn even when direct instruction was used, and that some learners might benefit from less direct instruction – comes up time and again in the evidence cited by Kirschner, Sweller and Clark, but gets overlooked in their conclusion.

I can quite see why educational methods using ‘minimal instruction’ might fail, and agree that proponents of such methods don’t appear to have taken much notice of such research findings as there are. But the findings are not unambiguous. It might be true that the evidence ‘almost uniformly supports direct, strong instructional guidance rather than constructivist-based minimal guidance during the instruction of novice to intermediate learners’ [my emphasis] but teachers aren’t faced with that forced choice. Also the evidence doesn’t show that direct, strong instructional guidance is always effective for all learners. I’m still not convinced that Kirschner, Sweller & Clark’s conclusion is justified.


References

Damasio, A. (2006). Descartes’ Error. Vintage Books.
Klahr, D., & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15, 661–667.
Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12, 257–285.
Wagner, A.D., Bunge, S.A., & Badre, D. (2004). Cognitive control, semantic memory and priming: Contributions from prefrontal cortex. In M.S. Gazzaniga (Ed.), The Cognitive Neurosciences (3rd edn.). Cambridge, MA: MIT Press.

education reform: evaluating the evidence

Over the last few weeks I’ve found myself moving from being broadly sympathetic to educational ‘reform’ to being quite critical of it. One comment on my blog was “You appear to be doing that thing where you write loads, but it is hard to identify any clear points.” Point taken. I’ll see what I can do in this post.

my search for the evidence

I’ve been perplexed by the ideas underpinning the current English education system since my children started encountering problems with it about a decade ago. After a lot of searching, I came to the conclusion that the entire system was lost in a constructivist wilderness. I joined the TES forum to find out more, and discovered that on the whole, teachers weren’t – lost, that is. I came across references to evidence-based educational research and felt hopeful.

Some names were cited; Engelmann, Hirsch, Hattie, Willingham. I pictured a growing body of rigorous research and searched for the authors’ work. Apart from Hattie’s, I couldn’t find much. Willingham was obviously a cognitive psychologist but I couldn’t find his research either. I was puzzled. Most of the evidence seemed to come from magazine articles and a few large-scale studies – notorious for methodological problems. I then heard about Daisy Christodoulou’s book Seven Myths about Education and thought that might give me some pointers. I searched her blog.

In one post, Daisy cites work from the field of information theory by Kirschner, Sweller & Clark, Herb Simon and John Anderson. I was familiar with the last two researchers, but couldn’t open the Simon papers and Anderson’s seemed a bit technical for a general readership. I hadn’t come across the Kirschner, Sweller and Clark reference so I read it. I could see what they were getting at, but thought their reasoning was flawed.

Then it dawned on me. This was the evidence bit of the evidence-based research. It consisted of some early cognitive science/information theory, some large-scale studies and a meta-analysis, together with a large amount of opinion. To me that didn’t constitute a coherent body of evidence. But I was told that there was more to it, which is why I attended the ResearchED conference last weekend. There was more to it, but the substantial body of research didn’t materialise. So where does that leave me?

I still agree with some points that the educational reformers make:

• English-speaking education systems are dominated by constructivist pedagogical approaches
• the implementation of ‘minimal guidance’ approaches has failed to provide children with a good education
• we have a fairly reliable, valid body of knowledge about the world and children should learn about it
• skills tend to be domain-specific
• cognitive science can tell us a lot about how children learn
• the capacity of working memory is limited
• direct instruction is an effective way of teaching.

But I have several reservations that make me uneasy about the education reform ‘movement’.

1. the evidence.

Some is cited frequently. Here’s a summary.

If I’ve understood it correctly, Engelmann and Becker’s DISTAR programme (Direct Instruction System for Teaching Arithmetic and Reading) had far better outcomes for basic maths and reading, higher order cognitive skills (in reading and maths) and responsibility and self-esteem than any other programme in the Project Follow-Through evaluation carried out in 1977.

At around the same time, ED Hirsch had realised that his students’ comprehension of texts was impaired by their poor general knowledge, and in 1983 he published an outline of his concept of what he called ‘cultural literacy’.

A couple of decades later, Daniel Willingham, a cognitive psychologist, started to apply theory from cognitive science to education.

In 2008, John Hattie published Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement – the result of 15 years’ work. The effect sizes Hattie found for various educational factors are ranked here.
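For readers unfamiliar with effect sizes, here is a minimal sketch of Cohen’s d, the standardised mean difference that underlies rankings like Hattie’s. The data are invented for illustration; Hattie’s figures come from aggregating published meta-analyses, not from calculations like this one.

```python
import statistics

def cohens_d(group_a, group_b):
    """Standardised mean difference: (mean_a - mean_b) / pooled standard
    deviation. An effect size of ~0.4 is roughly Hattie's 'hinge point'."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Invented test scores for an intervention group and a control group.
intervention = [12, 14, 15, 13, 16]
control = [10, 11, 12, 10, 12]
print(round(cohens_d(intervention, control), 2))  # 2.27 – an implausibly large effect
```

An effect size strips out the units of the original measure, which is what lets a meta-analysis compare studies that used different tests – and is also why aggregated effect sizes can obscure differences between the studies being pooled.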

Kirschner, Sweller and Clark’s 2006 paper Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching is also often cited. John Sweller developed the concept of ‘cognitive load’ in the 1980s, based on the limited capacity of working memory.

2. the conclusions that can be drawn from the evidence

The DISTAR programme, often referred to as Direct Instruction (capitalised), is clearly very effective for teaching basic maths and literacy. This is an outcome not to be sniffed at, so it would be worth exploring why DISTAR hasn’t been more widely adopted. Proponents of direct instruction often claim it’s because of entrenched ideological opposition; it might also be to do with the fact that it’s a proprietary programme, that teacher input is highly constrained, and that schools have to teach more than basic maths and literacy.

ED Hirsch’s observation that students need prior knowledge before they can comprehend texts involving that knowledge is a helpful one, but has more to say about curriculum design than pedagogy. There are some major issues around all schools using an identical curriculum, who controls the content and how children’s knowledge of the curriculum is assessed.

Daniel Willingham has written extensively on how findings from cognitive science can be applied to education, and cognitive science is clearly a rich source of useful information. The reason I couldn’t find his earlier research (mainly about procedural memory) appears to be that at some point he changed his middle initial from B to T; I’d assumed the work was by someone else.

Although I have doubts about Kirschner Sweller and Clark’s paper, again the contribution from cognitive science is potentially valuable.

John Hattie’s meta-analyses provide some very useful insights into the effectiveness of educational influences.

The most substantial bodies of evidence cited are clearly cognitive science and Hattie’s meta-analyses, which provide a valuable starting point for further exploration of the influences he ranks. Those are my conclusions.

But other conclusions are being drawn – often that the evidence cited above supports the view that direct instruction is the most effective way of teaching and that traditional educational methods (however they are defined) are superior to progressive ones (however they are defined). Those conclusions seem to me to be using the evidence to support beliefs about educational methods, rather than deriving beliefs about educational methods from the evidence.

3. who’s evaluating the evidence?

A key point made by proponents of direct instruction is that students need to have knowledge before they can do anything effective with it. Obviously they do. But this principle appears to be overlooked by the very people who are emphasising it.

If you want to understand and apply findings from a meta-analysis, you need to be aware of common problems with meta-analyses, how reliable they are, and what you need to bear in mind about complex constructs. You don’t need to have read everything there is to read about meta-analyses, just to be aware of the potential pitfalls. If you want to apply findings from cognitive science, it would help to have at least a broad overview of cognitive science first. That’s because, if you don’t have much prior knowledge, you have no way of knowing how reliable or valid information is. If it’s from a peer-reviewed paper, there’s a good chance it’s reliable, because the reviewers would have looked at the theory, the data, the analysis and the conclusions. How valid it is (i.e. how well it maps onto the real world) is another matter. I want to look at some of what ED Hirsch has written to illustrate the point.

Hirsch on psychology and science

Hirsch’s work is often referred to by education reformers. I think he’s right to emphasise the importance of students’ knowledge and I’m impressed by his Core Knowledge framework. There’s now a UK version (slightly less impressive) and his work has influenced the new English National Curriculum. But when I started to check out some of what Hirsch has written, I was disconcerted to find that he doesn’t seem to practise what he preaches. In an article in Policy Review he sets out seven ‘reliable general principles’ derived from cognitive science to guide teachers. The principles are sound, even if he has misconstrued ‘chunking’ and views rehearsal as a ‘disagreeable need’.

But Hirsch’s misunderstanding of the history of psychology suggests that not everything he says about psychology might be entirely reliable. He says:

Fifty years ago [the article is dated 2002] psychology was dominated by the guru principle. One declared an allegiance to B.F. Skinner and behaviorism, or to Piaget and stage theory, or to Vygotsky and social theory. Today, by contrast, a new generation of “cognitive scientists,” while duly respectful of these important figures, have leavened their insights with further evidence (not least, thanks to new technology), and have been able to take a less speculative and guru-dominated approach. This is not to suggest that psychology has now reached the maturity and consensus level of solid-state physics. But it is now more reliable than it was, say, in the Thorndike era with its endless debates over “transfer of training.”

This paragraph is riddled with misconceptions. Skinner was indeed an influential psychologist, but behaviourism was controversial – Noam Chomsky was a high-profile critic. Piaget was influential in educational circles – but children’s cognitive development formed one small strand of the wide range of areas being investigated by psychologists. Vygotsky’s work has also been influential in education, but it didn’t become widely known in the West until after the publication in 1978 of Mind in Society – a collection of his writings translated into English – so he couldn’t have had ‘guru’ status in psychology in the 1950s. And to suggest that cognitive scientists are ‘duly respectful’ of Skinner, Piaget and Vygotsky as ‘important figures’ in their field betrays a complete misunderstanding of the roots of cognitive science and of what matters to cognitive scientists. But you wouldn’t be able to question what Hirsch is saying if you had no prior information. And in this article, Hirsch doesn’t support his assertions with references, so you couldn’t check them out.

In a conference address that also forms a chapter in a book entitled The Great Curriculum Debate, Hirsch attributes progressive educational methods to the Romantic movement and, in turn, to religious beliefs, completely overlooking the origins of ‘progressive’ educational methodologies in psychological research and, significantly, the influence of Freud’s work.

In the grand scheme of things, of course, Hirsch’s view of psychology in the 1950s, or his view of the origins of progressive education, doesn’t matter that much. What does matter is that Hirsch himself is seen as something of a guru, largely because of his emphasis on students needing sound prior knowledge – yet here he clearly hasn’t checked his own.

What’s more important is Hirsch’s view of science. In ‘on convergence and consensus’, the last section of his essay Classroom research and cargo cults, he compares classroom research with research from cognitive psychology and says “independent convergence has always been the hallmark of dependable science”. That’s true in the sense that if several researchers approaching a problem from different directions all come to the same conclusion, they can be reasonably confident that the conclusion is a valid one.

Hirsch illustrates the role of convergence using the example of germ theory. He says “in the nineteenth century, for example, evidence from many directions converged on the germ theory of disease. Once policymakers accepted that consensus, hospital operating rooms, under penalty of being shut down, had to meet high standards of cleanliness.” What’s interesting is that Hirsch slips, almost imperceptibly, from ‘convergence’ into ‘consensus’. In scientific research, convergence is important, but consensus can be extremely misleading because it can be, and often has been, wrong. Ironically, not long before high standards of cleanliness were imposed on hospitals, the consensus had been that cross-contamination theory was wrong, as Semmelweis discovered to his cost. Reliable findings aren’t the same as valid ones.

Hirsch then goes on to say “What policymakers should demand from the [education] research community is consensus.” No they shouldn’t. Consensus can be wrong. What policymakers need to demand from education research is methodological rigour. We already have the relevant expertise; it just needs to be applied to education. Again, if you have no frame of reference against which you can evaluate what Hirsch is saying, you’d be quite likely to assume that he’s right about convergence and consensus – and you’d be none the wiser about the importance of good research design.

what the teachers say

I’m genuinely enthusiastic about teachers wanting to base their practice on evidence. I recognise that this is a work in progress and that it has only just begun. I can quite understand why someone whose teaching has been transformed by a finding from cognitive science might want to share that information as widely as possible. But ironically, some of the teachers involved appear to be doing exactly the opposite of what they recommend teachers do with students.

If you’re not familiar with a knowledge domain but want to use findings from it, it’s worth getting an overview of it first. This doesn’t involve learning loads of concrete facts; it involves getting someone with good domain knowledge to give you an outline of how the domain works, so you can see how the concrete facts fit in. It also involves making sure you know what domain-specific skills are required to handle those facts, and whether or not you have them. And it means not making overstated claims. Applying seven principles from cognitive science means you are applying seven principles from cognitive science. That’s all. It’s important to avoid making claims that aren’t supported by the evidence.

What struck me about the supporters of educational reform is that science teachers are conspicuous by their absence. Most of the complaints about progressive education seem to relate to English, Mathematics and History. These are all fields that deal with highly abstracted information and are especially vulnerable to constructivist worldviews, so they might have been disproportionately influenced by ‘minimal guidance’ methods. It’s rather more difficult to take an extreme constructivist approach to physics, chemistry, biology or physical geography, because reality tends to intervene quite early on. The irony is that science teachers might be in a better position than teachers of English, Maths or History to evaluate evidence from educational research. And psychology teachers and educational psychologists would have the relevant domain knowledge, which would help avoid reinventing the wheel. I’d recommend getting some of them on board.